Dataset schema (column types and observed value ranges):

| Column | Type | Range / values |
| --- | --- | --- |
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-14 12:27:51 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 520 classes |
| tags | list | length 1–4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-14 12:25:52 |
| card | string | length 11–1.01M |
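Rows with this schema can also be pulled programmatically; below is a minimal sketch using the `datasets` library, where the repo id is a placeholder for wherever this dump is hosted:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the Hub dataset that actually hosts this dump.
ds = load_dataset("some-org/model-cards-with-metadata", split="train")

print(ds.column_names)                       # modelId, author, last_modified, downloads, ...
print(ds[0]["modelId"], ds[0]["downloads"])  # first row's model id and download count
```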
modelId: TheBloke/genz-13B-v2-GPTQ
author: TheBloke
last_modified: 2023-10-14T23:36:43Z
downloads: 17
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "en", "base_model:budecosystem/genz-13b-v2", "base_model:quantized:budecosystem/genz-13b-v2", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
pipeline_tag: text-generation
createdAt: 2023-10-14T22:55:54Z
--- base_model: budecosystem/genz-13b-v2 inference: false language: - en library_name: transformers license: llama2 model_creator: Bud model_name: GenZ 13B v2 model_type: llama pipeline_tag: text-generation prompt_template: '### User: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # GenZ 13B v2 - GPTQ - Model creator: [Bud](https://huggingface.co/budecosystem) - Original model: [GenZ 13B v2](https://huggingface.co/budecosystem/genz-13b-v2) <!-- description start --> ## Description This repo contains GPTQ model files for [Bud's GenZ 13B v2](https://huggingface.co/budecosystem/genz-13b-v2). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/genz-13B-v2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/genz-13B-v2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/genz-13B-v2-GGUF) * [Bud's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/budecosystem/genz-13b-v2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: User-Assistant-Newlines ``` ### User: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. 
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/genz-13B-v2-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/genz-13B-v2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/genz-13B-v2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/genz-13B-v2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/genz-13B-v2-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/genz-13B-v2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/genz-13B-v2-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/genz-13B-v2-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `genz-13B-v2-GPTQ`: ```shell mkdir genz-13B-v2-GPTQ huggingface-cli download TheBloke/genz-13B-v2-GPTQ --local-dir genz-13B-v2-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir genz-13B-v2-GPTQ huggingface-cli download TheBloke/genz-13B-v2-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir genz-13B-v2-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir genz-13B-v2-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/genz-13B-v2-GPTQ --local-dir genz-13B-v2-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/genz-13B-v2-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/genz-13B-v2-GPTQ`. 
- To download from a specific branch, enter for example `TheBloke/genz-13B-v2-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `genz-13B-v2-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/genz-13B-v2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''### User: {prompt} ### Assistant: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . 
``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/genz-13B-v2-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''### User: {prompt} ### Assistant: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Bud's GenZ 13B v2 --- <div align="center"><h1 align="center">~ GenZ ~</h1><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/genz-logo.png" width=150></div> <p align="center"><i>Democratizing access to LLMs for the open-source community.<br>Let's advance AI, together. </i></p> --- ## Introduction 🎉 Welcome to **GenZ**, an advanced Large Language Model (LLM) fine-tuned on the foundation of Meta's open-source Llama V2 13B parameter model. At Bud Ecosystem, we believe in the power of open-source collaboration to drive the advancement of technology at an accelerated pace. Our vision is to democratize access to fine-tuned LLMs, and to that end, we will be releasing a series of models across different parameter counts (7B, 13B, and 70B) and quantizations (32-bit and 4-bit) for the open-source community to use, enhance, and build upon. <p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/MTBench_CompareChart_28July2023.png" width="500"></p> The smaller quantization version of our models makes them more accessible, enabling their use even on personal computers. This opens up a world of possibilities for developers, researchers, and enthusiasts to experiment with these models and contribute to the collective advancement of language model technology. GenZ isn't just a powerful text generator—it's a sophisticated AI assistant, capable of understanding and responding to user prompts with high-quality responses. We've taken the robust capabilities of Llama V2 and fine-tuned them to offer a more user-focused experience. 
Whether you're seeking informative responses or engaging interactions, GenZ is designed to deliver. And this isn't the end. It's just the beginning of a journey towards creating more advanced, more efficient, and more accessible language models. We invite you to join us on this exciting journey. 🚀 --- <h2>Milestone Releases ️🏁</h2> **[27 July 2023]** [_GenZ-13B V2 (ggml)_](https://huggingface.co/budecosystem/genz-13b-v2-ggml) : Announcing our GenZ-13B v2 with ggml. This variant of GenZ can run inference using only a CPU, with no GPU required. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-ggml). **[27 July 2023]** [_GenZ-13B V2 (4-bit)_](https://huggingface.co/budecosystem/genz-13b-v2-4bit) : Announcing our GenZ-13B v2 with 4-bit quantisation. It enables inference with much less GPU memory than the 32-bit variant. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-4bit). **[26 July 2023]** [_GenZ-13B V2_](https://huggingface.co/budecosystem/genz-13b-v2) : We're excited to announce the release of our Genz 13B v2 model, a step forward with improved evaluation results compared to v1. Experience the advancements by downloading the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2). **[20 July 2023]** [_GenZ-13B_](https://huggingface.co/budecosystem/genz-13b) : We marked an important milestone with the release of the Genz 13B model. The journey began here, and you can partake in it by downloading the model from [Hugging Face](https://huggingface.co/budecosystem/genz-13b). --- <img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/screenshot_genz13bv2.png" width="100%"> | ![Python](https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/Python.gif) | ![Poem](https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/Poem.gif) | ![Email](https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/Email.gif) |:--:|:--:|:--:| | *Code Generation* | *Poem Generation* | *Email Generation* | <!-- <p align="center"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Python.gif" width="33%" alt="Python Code"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Poem.gif" width="33%"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Email.gif" width="33%"></p> --> <h2>Getting Started on Hugging Face 🤗</h2> Getting up and running with our models on Hugging Face is a breeze. Follow these steps: <h3>1️⃣ : Import necessary modules</h3> Start by importing the necessary modules from the ‘transformers’ library and ‘torch’. ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM ``` <h3>2️⃣ : Load the tokenizer and the model</h3> Next, load up the tokenizer and the model for ‘budecosystem/genz-13b-v2’ from Hugging Face using the ‘from_pretrained’ method. ```python tokenizer = AutoTokenizer.from_pretrained("budecosystem/genz-13b-v2", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("budecosystem/genz-13b-v2", torch_dtype=torch.bfloat16) ``` <h3>3️⃣ : Generate responses</h3> Now that you have the model and tokenizer, you're ready to generate responses. Here's how you can do it: ```python inputs = tokenizer("The meaning of life is", return_tensors="pt") sample = model.generate(**inputs, max_length=128) print(tokenizer.decode(sample[0])) ``` In this example, "The meaning of life is" is the prompt template used for inference. 
You can replace it with any string you like. Want to interact with the model in a more intuitive way? We have a Gradio interface set up for that. Head over to our GitHub page, clone the repository, and run the ‘generate.py’ script to try it out. Happy experimenting! 😄 <h2>Fine-tuning 🎯</h2> You can take the model further by fine-tuning it. You can do this using our provided finetune.py script. Here's an example command: ```bash python finetune.py \ --model_name meta-llama/Llama-2-13b \ --data_path dataset.json \ --output_dir output \ --trust_remote_code \ --prompt_column instruction \ --response_column output \ --pad_token_id 50256 ``` --- <h2 >Bonus: Colab Notebooks 📚 <b><i>(WIP)</i></b></h2> Looking for an even simpler way to get started with GenZ? We've got you covered. We've prepared a pair of detailed Colab notebooks - one for Inference and one for Fine-tuning. These notebooks come pre-filled with all the information and code you'll need. All you'll have to do is run them! Keep an eye out for these notebooks. They'll be added to the repository soon! --- <h2>Why Use GenZ? 💡</h2> You might be wondering, "Why should I choose GenZ over a pretrained model?" The answer lies in the extra mile we've gone to fine-tune our models. While pretrained models are undeniably powerful, GenZ brings something extra to the table. We've fine-tuned it with curated datasets, which means it has additional skills and capabilities beyond what a pretrained model can offer. Whether you need it for a simple task or a complex project, GenZ is up for the challenge. What's more, we are committed to continuously enhancing GenZ. We believe in the power of constant learning and improvement. That's why we'll be regularly fine-tuning our models with various curated datasets to make them even better. Our goal is to reach the state of the art and beyond - and we're committed to staying the course until we get there. But don't just take our word for it. We've provided detailed evaluations and performance details in a later section, so you can see the difference for yourself. Choose GenZ and join us on this journey. Together, we can push the boundaries of what's possible with large language models. --- <h2>Model Card for GenZ 13B 📄</h2> Here's a quick overview of everything you need to know about GenZ 13B. <h3>Model Details:</h3> - Developed by: Bud Ecosystem - Base pretrained model type: Llama V2 13B - Model Architecture: GenZ 13B, fine-tuned on Llama V2 13B, is an auto-regressive language model that employs an optimized transformer architecture. The fine-tuning process for GenZ 13B leveraged Supervised Fine-Tuning (SFT). - License: The model is available for commercial use under a custom commercial license. For more information, please visit: [Meta AI Model and Library Downloads](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) --- <h2>Intended Use 💼</h2> When we created GenZ 13B, we had a clear vision of how it could be used to push the boundaries of what's possible with large language models. We also understand the importance of using such models responsibly. Here's a brief overview of the intended and out-of-scope uses for GenZ 13B. <h3>Direct Use</h3> GenZ 13B is designed to be a powerful tool for research on large language models. It's also an excellent foundation for further specialization and fine-tuning for specific use cases, such as: - Text summarization - Text generation - Chatbot creation - And much more! 
<h3>Out-of-Scope Use 🚩</h3> While GenZ 13B is versatile, there are certain uses that are out of scope: - Production use without adequate assessment of risks and mitigation - Any use cases which may be considered irresponsible or harmful - Use in any manner that violates applicable laws or regulations, including trade compliance laws - Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2 Remember, GenZ 13B, like any large language model, is trained on large-scale corpora representative of the web, and therefore, may carry the stereotypes and biases commonly encountered online. <h3>Recommendations 🧠</h3> We recommend that users of GenZ 13B consider fine-tuning it for the specific tasks of interest. Appropriate precautions and guardrails should be taken for any production use. Using GenZ 13B responsibly is key to unlocking its full potential while maintaining a safe and respectful environment. --- <h2>Training Details 📚</h2> When fine-tuning GenZ 13B, we took a meticulous approach to ensure we were building on the solid base of the pretrained Llama V2 13B model in the most effective way. Here's a look at the key details of our training process: <h3>Fine-Tuning Training Data</h3> For the fine-tuning process, we used a carefully curated mix of datasets. These included data from OpenAssistant, an instruction fine-tuning dataset, and Thought Source for the Chain Of Thought (CoT) approach. This diverse mix of data sources helped us enhance the model's capabilities across a range of tasks. <h3>Fine-Tuning Procedure</h3> We performed a full-parameter fine-tuning using Supervised Fine-Tuning (SFT). This was carried out on 4 A100 80GB GPUs, and the process took under 100 hours. To make the process more efficient, we used DeepSpeed's ZeRO-3 optimization. <h3>Tokenizer</h3> We used the SentencePiece tokenizer during the fine-tuning process. This tokenizer is known for its capability to handle open-vocabulary language tasks efficiently. <h3>Hyperparameters</h3> Here are the hyperparameters we used for fine-tuning: | Hyperparameter | Value | | -------------- | ----- | | Warmup Ratio | 0.04 | | Learning Rate Scheduler Type | Cosine | | Learning Rate | 2e-5 | | Number of Training Epochs | 3 | | Per Device Training Batch Size | 4 | | Gradient Accumulation Steps | 4 | | Precision | FP16 | | Optimizer | AdamW | --- <h2>Evaluations 🎯</h2> Evaluating our model is a key part of our fine-tuning process. It helps us understand how our model is performing and how it stacks up against other models. Here's a look at some of the key evaluations for GenZ 13B: <h3>Benchmark Comparison</h3> We've compared GenZ V1 with V2 to understand the improvements our fine-tuning has achieved. | Model Name | MT Bench | Vicuna Bench | MMLU | Human Eval | Hellaswag | BBH | |:----------:|:--------:|:------------:|:----:|:----------:|:---------:|:----:| | Genz 13B | 6.12 | 86.1 | 53.62| 17.68 | 77.38 | 37.76| | Genz 13B v2| 6.79 | 87.2 | 53.68| 21.95 | 77.48 | 38.1 | <h3>MT Bench Score</h3> A key evaluation metric we use is the MT Bench score. This score provides a comprehensive assessment of our model's performance across a range of tasks. We're proud to say that our model performs at a level that's close to the Llama-70B-chat model on the MT Bench, and tops the list among 13B models. 
<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/mt_bench_score.png" width="500"></p> In the transition from GenZ V1 to V2, we noticed some fascinating performance shifts. While we saw a slight dip in coding performance, two other areas, Roleplay and Math, saw noticeable improvements. --- <h2>Looking Ahead 👀</h2> We're excited about the journey ahead with GenZ. We're committed to continuously improving and enhancing our models, and we're excited to see what the open-source community will build with them. We believe in the power of collaboration, and we can't wait to see what we can achieve together. Remember, we're just getting started. This is just the beginning of a journey that we believe will revolutionize the world of large language models. We invite you to join us on this exciting journey. Together, we can push the boundaries of what's possible with AI. 🚀 --- Check the GitHub for the code -> [GenZ](https://raw.githubusercontent.com/BudEcosystem/GenZ)
modelId: TheBloke/tora-7B-v1.0-GGUF
author: TheBloke
last_modified: 2023-10-14T23:33:17Z
downloads: 62
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "llama", "code", "math", "text-generation", "en", "dataset:gsm8k", "dataset:competition_math", "arxiv:2309.17452", "base_model:llm-agents/tora-7b-v1.0", "base_model:quantized:llm-agents/tora-7b-v1.0", "license:llama2", "region:us" ]
pipeline_tag: text-generation
createdAt: 2023-10-14T23:29:23Z
--- base_model: llm-agents/tora-7b-v1.0 datasets: - gsm8k - competition_math inference: false language: - en library_name: transformers license: llama2 metrics: - exact_match model_creator: LLM-Agents model_name: ToRA 7B v1.0 model_type: llama pipeline_tag: text-generation prompt_template: '<|user|> {prompt} <|assistant|> ' quantized_by: TheBloke tags: - code - math --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # ToRA 7B v1.0 - GGUF - Model creator: [LLM-Agents](https://huggingface.co/llm-agents) - Original model: [ToRA 7B v1.0](https://huggingface.co/llm-agents/tora-7b-v1.0) <!-- description start --> ## Description This repo contains GGUF format model files for [LLM-Agents's ToRA 7B v1.0](https://huggingface.co/llm-agents/tora-7b-v1.0). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/tora-7B-v1.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/tora-7B-v1.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF) * [LLM-Agents's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llm-agents/tora-7b-v1.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ToRA ``` <|user|> {prompt} <|assistant|> ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221). They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw). * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [tora-7b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [tora-7b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [tora-7b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [tora-7b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [tora-7b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [tora-7b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [tora-7b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [tora-7b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [tora-7b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [tora-7b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [tora-7b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [tora-7b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/tora-7B-v1.0-GGUF/blob/main/tora-7b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/tora-7B-v1.0-GGUF and below it, a specific filename to download, such as: tora-7b-v1.0.Q4_K_M.gguf. Then click Download. 
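If you prefer to script the download rather than use a UI or the CLI commands in the next section, here is a minimal Python sketch with `huggingface_hub` that has the same effect:

```python
from huggingface_hub import hf_hub_download

# Fetch a single quant file from the repo into the current directory.
path = hf_hub_download(
    repo_id="TheBloke/tora-7B-v1.0-GGUF",
    filename="tora-7b-v1.0.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```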
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/tora-7B-v1.0-GGUF tora-7b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/tora-7B-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/tora-7B-v1.0-GGUF tora-7b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m tora-7b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/tora-7B-v1.0-GGUF", model_file="tora-7b-v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end --> <!-- original-model-card start --> # Original model card: LLM-Agents's ToRA 7B v1.0 <h1 align="center"> ToRA: A Tool-Integrated Reasoning Agent <br> for Mathematical Problem Solving </h1> <p align="center"> <a href="https://microsoft.github.io/ToRA/"><b>[🌐 Website]</b></a> • <a href="https://arxiv.org/pdf/2309.17452.pdf"><b>[📜 Paper]</b></a> • <a href="https://huggingface.co/llm-agents"><b>[🤗 HF Models]</b></a> • <a href="https://github.com/microsoft/ToRA"><b>[🐱 GitHub]</b></a> <br> <a href="https://twitter.com/zhs05232838/status/1708860992631763092"><b>[🐦 Twitter]</b></a> • <a href="https://www.reddit.com/r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/"><b>[💬 Reddit]</b></a> • <a href="https://notes.aimodels.fyi/researchers-announce-tora-training-language-models-to-better-understand-math-using-external-tools/">[🍀 Unofficial Blog]</a> <!-- <a href="#-quick-start">Quick Start</a> • --> <!-- <a href="#%EF%B8%8F-citation">Citation</a> --> </p> <p align="center"> Repo for "<a href="https://arxiv.org/pdf/2309.17452.pdf" target="_blank">ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving</a>" </p> ## 🔥 News - [2023/10/08] 🔥🔥🔥 All ToRA models released at [HuggingFace](https://huggingface.co/llm-agents)!!! - [2023/09/29] ToRA paper, repo, and website released. ## 💡 Introduction ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical reasoning problems by interacting with tools, e.g., computation libraries and symbolic solvers. ToRA series seamlessly integrate natural language reasoning with the utilization of external tools, thereby amalgamating the analytical prowess of language and the computational efficiency of external tools. | Model | Size | GSM8k | MATH | AVG@10 math tasks<sup>&dagger;</sup> | |---|---|---|---|---| | GPT-4 | - | 92.0 | 42.5 | 78.3 | | GPT-4 (PAL) | - | 94.2 | 51.8 | 86.4 | | [ToRA-7B](https://huggingface.co/llm-agents/tora-7b-v1.0) | 7B | 68.8 | 40.1 | 62.4| | [ToRA-Code-7B](https://huggingface.co/llm-agents/tora-code-7b-v1.0) | 7B | 72.6 | 44.6 | 66.5| | [ToRA-13B](https://huggingface.co/llm-agents/tora-13b-v1.0) | 13B | 72.7 | 43.0 | 65.9| | [ToRA-Code-13B](https://huggingface.co/llm-agents/tora-code-13b-v1.0) | 13B | 75.8 | 48.1 | 71.3 | | [ToRA-Code-34B<sup>*</sup>](https://huggingface.co/llm-agents/tora-code-34b-v1.0) | 34B | 80.7 | **51.0** | 74.8 | | [ToRA-70B](https://huggingface.co/llm-agents/tora-70b-v1.0) | 70B | **84.3** | 49.7 | **76.9** | - <sup>*</sup>ToRA-Code-34B is currently the first and only open-source model to achieve over 50% accuracy (pass@1) on the MATH dataset, which significantly outperforms GPT-4’s CoT result (51.0 vs. 42.5), and is competitive with GPT-4 solving problems with programs. By open-sourcing our codes and models, we hope more breakthroughs will come! - <sup>&dagger;</sup>10 math tasks include GSM8k, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, SingleEQ, SingleOP, AddSub, and MultiArith. ## ⚡️ Training The models are trained on ToRA-Corpus 16k, which contains tool-integrated reasoning trajectories of MATH and GSM8k from GPT-4. We use imitation learning (i.e., SFT) to fine-tune the models, and then apply our proposed *output space shaping* to improve tool-integrated reasoning behaviors. Please refer to the [paper](https://arxiv.org/pdf/2309.17452.pdf) for more details. ## 🪁 Inference & Evaluation Please refer to ToRA's [GitHub repo](https://github.com/microsoft/ToRA) for inference, evaluation, and training code. 
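As a quick local smoke test of the GGUF files from this repo using the ToRA prompt template, here is a minimal `llama-cpp-python` sketch; the file name, layer count, and sampling values are illustrative:

```python
from llama_cpp import Llama

# Load a quantised ToRA file; set n_gpu_layers=0 for CPU-only inference.
llm = Llama(model_path="tora-7b-v1.0.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

# Apply the ToRA prompt template from this card.
prompt = "<|user|>\nWhat is 17 * 23?\n<|assistant|>\n"
out = llm(prompt, max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```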
## ☕️ Citation If you find this repository helpful, please consider citing our paper: ``` @misc{gou2023tora, title={ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving}, author={Zhibin Gou and Zhihong Shao and Yeyun Gong and yelong shen and Yujiu Yang and Minlie Huang and Nan Duan and Weizhu Chen}, year={2023}, eprint={2309.17452}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- original-model-card end -->
modelId: MaxReynolds/SouderRocketLauncherNetCombined_300
author: MaxReynolds
last_modified: 2023-10-14T23:29:26Z
downloads: 3
likes: 0
library_name: diffusers
tags: [ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dataset:MaxReynolds/Lee_Souder_Combined", "base_model:CompVis/stable-diffusion-v1-2", "base_model:finetune:CompVis/stable-diffusion-v1-2", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
pipeline_tag: text-to-image
createdAt: 2023-10-14T23:12:02Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-2 datasets: - MaxReynolds/Lee_Souder_Combined tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Text-to-image finetuning - MaxReynolds/SouderRocketLauncherNetCombined_300 This pipeline was finetuned from **CompVis/stable-diffusion-v1-2** on the **MaxReynolds/Lee_Souder_Combined** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['Rocket Launcher by Lee Souder']: ![val_imgs_grid](./val_imgs_grid.png) ## Pipeline usage You can use the pipeline like so: ```python from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained("MaxReynolds/SouderRocketLauncherNetCombined_300", torch_dtype=torch.float16) pipeline = pipeline.to("cuda") # half-precision weights need a CUDA device prompt = "Rocket Launcher by Lee Souder" image = pipeline(prompt).images[0] image.save("my_image.png") ``` ## Training info These are the key hyperparameters used during training: * Epochs: 30 * Learning rate: 1e-05 * Batch size: 1 * Gradient accumulation steps: 4 * Image resolution: 512 * Mixed-precision: fp16 More information on all the CLI arguments and the environment is available on the [`wandb` run page](https://wandb.ai/max-f-reynolds/text2image-fine-tune/runs/qlfa2vzh).
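For reproducible samples, a seeded `torch.Generator` can be passed to the pipeline; a small sketch assuming a CUDA device (the seed value is arbitrary):

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "MaxReynolds/SouderRocketLauncherNetCombined_300", torch_dtype=torch.float16
).to("cuda")

# Fixing the generator seed makes the sample repeatable run to run.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipeline("Rocket Launcher by Lee Souder", generator=generator).images[0]
image.save("rocket_launcher_seed42.png")
```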
modelId: TheBloke/tora-70B-v1.0-GPTQ
author: TheBloke
last_modified: 2023-10-14T23:28:45Z
downloads: 11
likes: 1
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "code", "math", "en", "dataset:gsm8k", "dataset:competition_math", "arxiv:2309.17452", "base_model:llm-agents/tora-70b-v1.0", "base_model:quantized:llm-agents/tora-70b-v1.0", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
pipeline_tag: text-generation
createdAt: 2023-10-14T20:11:27Z
--- base_model: llm-agents/tora-70b-v1.0 datasets: - gsm8k - competition_math inference: false language: - en library_name: transformers license: llama2 metrics: - exact_match model_creator: LLM-Agents model_name: ToRA 70B v1.0 model_type: llama pipeline_tag: text-generation prompt_template: '<|user|> {prompt} <|assistant|> ' quantized_by: TheBloke tags: - code - math --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # ToRA 70B v1.0 - GPTQ - Model creator: [LLM-Agents](https://huggingface.co/llm-agents) - Original model: [ToRA 70B v1.0](https://huggingface.co/llm-agents/tora-70b-v1.0) <!-- description start --> ## Description This repo contains GPTQ model files for [LLM-Agents's ToRA 70B v1.0](https://huggingface.co/llm-agents/tora-70b-v1.0). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/tora-70B-v1.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/tora-70B-v1.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF) * [LLM-Agents's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llm-agents/tora-70b-v1.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ToRA ``` <|user|> {prompt} <|assistant|> ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. 
Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/tora-70B-v1.0-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/tora-70B-v1.0-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/tora-70B-v1.0-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/tora-70B-v1.0-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 26.78 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/tora-70B-v1.0-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/tora-70B-v1.0-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 31.84 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/tora-70B-v1.0-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/tora-70B-v1.0-GPTQ:gptq-4bit-128g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `tora-70B-v1.0-GPTQ`:

```shell
mkdir tora-70B-v1.0-GPTQ
huggingface-cli download TheBloke/tora-70B-v1.0-GPTQ --local-dir tora-70B-v1.0-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir tora-70B-v1.0-GPTQ
huggingface-cli download TheBloke/tora-70B-v1.0-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir tora-70B-v1.0-GPTQ --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache.

This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir tora-70B-v1.0-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/tora-70B-v1.0-GPTQ --local-dir tora-70B-v1.0-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/tora-70B-v1.0-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)

<!-- README_GPTQ.md-download-from-branches end -->

<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/tora-70B-v1.0-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/tora-70B-v1.0-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `tora-70B-v1.0-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/tora-70B-v1.0-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template=f'''<|user|>
{prompt}
<|assistant|>
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```

### You can then use the following code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/tora-70B-v1.0-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template=f'''<|user|>
{prompt}
<|assistant|>
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: LLM-Agents's ToRA 70B v1.0 <h1 align="center"> ToRA: A Tool-Integrated Reasoning Agent <br> for Mathematical Problem Solving </h1> <p align="center"> <a href="https://microsoft.github.io/ToRA/"><b>[🌐 Website]</b></a> • <a href="https://arxiv.org/abs/2309.17452"><b>[📜 Paper]</b></a> • <a href="https://huggingface.co/llm-agents"><b>[🤗 HF Models]</b></a> • <a href="https://github.com/microsoft/ToRA"><b>[🐱 GitHub]</b></a> <br> <a href="https://twitter.com/zhs05232838/status/1708860992631763092"><b>[🐦 Twitter]</b></a> • <a href="https://www.reddit.com/r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/"><b>[💬 Reddit]</b></a> • <a href="https://notes.aimodels.fyi/researchers-announce-tora-training-language-models-to-better-understand-math-using-external-tools/">[🍀 Unofficial Blog]</a> <!-- <a href="#-quick-start">Quick Start</a> • --> <!-- <a href="#%EF%B8%8F-citation">Citation</a> --> </p> <p align="center"> Repo for "<a href="https://arxiv.org/abs/2309.17452" target="_blank">ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving</a>" </p> ## 🔥 News - [2023/10/08] 🔥🔥🔥 All ToRA models released at [HuggingFace](https://huggingface.co/llm-agents)!!! - [2023/09/29] ToRA paper, repo, and website released. ## 💡 Introduction ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical reasoning problems by interacting with tools, e.g., computation libraries and symbolic solvers. 
The ToRA series seamlessly integrates natural language reasoning with the use of external tools, combining the analytical strengths of language models with the computational efficiency of external tools.

| Model | Size | GSM8k | MATH | AVG@10 math tasks<sup>&dagger;</sup> |
|---|---|---|---|---|
| GPT-4 | - | 92.0 | 42.5 | 78.3 |
| GPT-4 (PAL) | - | 94.2 | 51.8 | 86.4 |
| [ToRA-7B](https://huggingface.co/llm-agents/tora-7b-v1.0) | 7B | 68.8 | 40.1 | 62.4 |
| [ToRA-Code-7B](https://huggingface.co/llm-agents/tora-code-7b-v1.0) | 7B | 72.6 | 44.6 | 66.5 |
| [ToRA-13B](https://huggingface.co/llm-agents/tora-13b-v1.0) | 13B | 72.7 | 43.0 | 65.9 |
| [ToRA-Code-13B](https://huggingface.co/llm-agents/tora-code-13b-v1.0) | 13B | 75.8 | 48.1 | 71.3 |
| [ToRA-Code-34B<sup>*</sup>](https://huggingface.co/llm-agents/tora-code-34b-v1.0) | 34B | 80.7 | **51.0** | 74.8 |
| [ToRA-70B](https://huggingface.co/llm-agents/tora-70b-v1.0) | 70B | **84.3** | 49.7 | **76.9** |

- <sup>*</sup>ToRA-Code-34B is currently the first and only open-source model to achieve over 50% accuracy (pass@1) on the MATH dataset, which significantly outperforms GPT-4’s CoT result (51.0 vs. 42.5), and is competitive with GPT-4 solving problems with programs. By open-sourcing our codes and models, we hope more breakthroughs will come!

- <sup>&dagger;</sup>10 math tasks include GSM8k, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, SingleEQ, SingleOP, AddSub, and MultiArith.

## ⚡️ Training

The models are trained on ToRA-Corpus 16k, which contains tool-integrated reasoning trajectories of MATH and GSM8k from GPT-4.

We use imitation learning (i.e., SFT) to fine-tune the models, and then apply our proposed *output space shaping* to improve tool-integrated reasoning behaviors. Please refer to the [paper](https://arxiv.org/pdf/2309.17452.pdf) for more details.

## 🪁 Inference & Evaluation

Please refer to ToRA's [GitHub repo](https://github.com/microsoft/ToRA) for inference, evaluation, and training code. A minimal illustrative sketch of the tool-integrated inference loop is included after the citation below.

## ☕️ Citation

If you find this repository helpful, please consider citing our paper:

```
@misc{gou2023tora,
      title={ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving},
      author={Zhibin Gou and Zhihong Shao and Yeyun Gong and yelong shen and Yujiu Yang and Minlie Huang and Nan Duan and Weizhu Chen},
      year={2023},
      eprint={2309.17452},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
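As referenced above, here is a minimal, hypothetical sketch of the tool-integrated generate-execute-continue loop: the model interleaves natural-language rationale with a Python program, the program's output is appended to the context, and generation continues. All helper names are invented for illustration, and the output-block convention is an assumption; the real inference code lives in the official repo.

```python
# Hypothetical sketch of a tool-integrated reasoning loop in the spirit of ToRA.
# Illustrative only; see https://github.com/microsoft/ToRA for the actual code.
import io
import contextlib

# Fence strings are built dynamically so this snippet itself stays a valid code block.
PY_FENCE = "`" * 3 + "python"
OUT_FENCE = "`" * 3 + "output"
FENCE = "`" * 3

def run_program(code: str) -> str:
    """Execute a generated Python snippet and capture its stdout."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # a real system would sandbox this
    except Exception as exc:
        return f"Error: {exc}"
    return buffer.getvalue().strip()

def tora_style_solve(generate, question: str, max_rounds: int = 3) -> str:
    # `generate` is any callable that continues a prompt string, e.g. a thin
    # wrapper around model.generate() using the <|user|>/<|assistant|> template.
    context = f"<|user|>\n{question}\n<|assistant|>\n"
    for _ in range(max_rounds):
        step = generate(context)      # rationale, possibly containing a python block
        context += step
        if PY_FENCE not in step:
            break                     # no tool call emitted: final answer reached
        code = step.split(PY_FENCE, 1)[1].split(FENCE, 1)[0]
        context += f"\n{OUT_FENCE}\n{run_program(code)}\n{FENCE}\n"
    return context
```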
TheBloke/genz-13B-v2-GGUF
TheBloke
2023-10-14T23:10:41Z
44
1
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "base_model:budecosystem/genz-13b-v2", "base_model:quantized:budecosystem/genz-13b-v2", "license:llama2", "region:us" ]
text-generation
2023-10-14T22:55:34Z
---
base_model: budecosystem/genz-13b-v2
inference: false
language:
- en
library_name: transformers
license: llama2
model_creator: Bud
model_name: GenZ 13B v2
model_type: llama
pipeline_tag: text-generation
prompt_template: '### User:

  {prompt}


  ### Assistant:

  '
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# GenZ 13B v2 - GGUF
- Model creator: [Bud](https://huggingface.co/budecosystem)
- Original model: [GenZ 13B v2](https://huggingface.co/budecosystem/genz-13b-v2)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Bud's GenZ 13B v2](https://huggingface.co/budecosystem/genz-13b-v2).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/genz-13B-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/genz-13B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/genz-13B-v2-GGUF)
* [Bud's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/budecosystem/genz-13b-v2)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: User-Assistant-Newlines

```
### User:
{prompt}

### Assistant:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
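As a quick sanity check on the bpw figures above, here is a small back-of-the-envelope calculation. It assumes llama.cpp's k-quant layout of 256 weights per super-block plus fp16 super-block scale factors ("type-1" quants carry a second fp16 factor for the mins), which the summary above does not spell out:

```python
# Recompute the quoted bits-per-weight (bpw) figures from the block sizes above.
def bpw(weight_bits, n_blocks, block_scale_bits, n_fp16_super_scales):
    n_weights = 256  # every k-quant super-block covers 256 weights
    total_bits = (
        n_weights * weight_bits            # the quantised weights themselves
        + n_blocks * block_scale_bits      # per-block scale (and min) data
        + n_fp16_super_scales * 16         # fp16 super-block scale factor(s)
    )
    return total_bits / n_weights

print(bpw(3, 16, 6, 1))      # Q3_K: 16 blocks, 6-bit scales        -> 3.4375
print(bpw(4, 8, 6 + 6, 2))   # Q4_K: 8 blocks, 6-bit scales + mins  -> 4.5
print(bpw(6, 16, 8, 1))      # Q6_K: 16 blocks, 8-bit scales        -> 6.5625
```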
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [genz-13b-v2.Q2_K.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [genz-13b-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [genz-13b-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [genz-13b-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [genz-13b-v2.Q4_0.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [genz-13b-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [genz-13b-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [genz-13b-v2.Q5_0.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [genz-13b-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [genz-13b-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [genz-13b-v2.Q6_K.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [genz-13b-v2.Q8_0.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/genz-13B-v2-GGUF and below it, a specific filename to download, such as: genz-13b-v2.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/genz-13B-v2-GGUF genz-13b-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/genz-13B-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/genz-13B-v2-GGUF genz-13b-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m genz-13b-v2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### User:\n{prompt}\n\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/genz-13B-v2-GGUF", model_file="genz-13b-v2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.
<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Bud's GenZ 13B v2

---

<div align="center"><h1 align="center">~ GenZ ~</h1><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/genz-logo.png" width=150></div>

<p align="center"><i>Democratizing access to LLMs for the open-source community.<br>Let's advance AI, together. </i></p>

---

## Introduction 🎉

Welcome to **GenZ**, an advanced Large Language Model (LLM) fine-tuned on the foundation of Meta's open-source Llama V2 13B parameter model. At Bud Ecosystem, we believe in the power of open-source collaboration to drive the advancement of technology at an accelerated pace. Our vision is to democratize access to fine-tuned LLMs, and to that end, we will be releasing a series of models across different parameter counts (7B, 13B, and 70B) and quantizations (32-bit and 4-bit) for the open-source community to use, enhance, and build upon.

<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/MTBench_CompareChart_28July2023.png" width="500"></p>

The smaller quantized versions of our models make them more accessible, enabling their use even on personal computers. This opens up a world of possibilities for developers, researchers, and enthusiasts to experiment with these models and contribute to the collective advancement of language model technology.

GenZ isn't just a powerful text generator—it's a sophisticated AI assistant, capable of understanding and responding to user prompts with high-quality responses. We've taken the robust capabilities of Llama V2 and fine-tuned them to offer a more user-focused experience. Whether you're seeking informative responses or engaging interactions, GenZ is designed to deliver.

And this isn't the end. It's just the beginning of a journey towards creating more advanced, more efficient, and more accessible language models. We invite you to join us on this exciting journey. 🚀

---

<h2>Milestone Releases ️🏁</h2>

**[27 July 2023]**
[_GenZ-13B V2 (ggml)_](https://huggingface.co/budecosystem/genz-13b-v2-ggml) : Announcing our GenZ-13B v2 with ggml. This variant of GenZ can run inferencing using only CPU, without the need for a GPU. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-ggml).

**[27 July 2023]**
[_GenZ-13B V2 (4-bit)_](https://huggingface.co/budecosystem/genz-13b-v2-4bit) : Announcing our GenZ-13B v2 with 4-bit quantisation, enabling inferencing with much less GPU memory than the 32-bit variant. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-4bit).

**[26 July 2023]**
[_GenZ-13B V2_](https://huggingface.co/budecosystem/genz-13b-v2) : We're excited to announce the release of our Genz 13B v2 model, a step forward with improved evaluation results compared to v1. Experience the advancements by downloading the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2).

**[20 July 2023]**
[_GenZ-13B_](https://huggingface.co/budecosystem/genz-13b) : We marked an important milestone with the release of the Genz 13B model. The journey began here, and you can partake in it by downloading the model from [Hugging Face](https://huggingface.co/budecosystem/genz-13b).
---

<img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/screenshot_genz13bv2.png" width="100%">

| ![Python](https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/Python.gif) | ![Poem](https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/Poem.gif) | ![Email](https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/Email.gif) |
|:--:|:--:|:--:|
| *Code Generation* | *Poem Generation* | *Email Generation* |

<!-- <p align="center"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Python.gif" width="33%" alt="Python Code"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Poem.gif" width="33%"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Email.gif" width="33%"></p> -->

<h2>Getting Started on Hugging Face 🤗</h2>

Getting up and running with our models on Hugging Face is a breeze. Follow these steps:

<h3>1️⃣ : Import necessary modules</h3>

Start by importing the necessary modules from the ‘transformers’ library and ‘torch’.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
```

<h3>2️⃣ : Load the tokenizer and the model</h3>

Next, load up the tokenizer and the model for ‘budecosystem/genz-13b-v2’ from Hugging Face using the ‘from_pretrained’ method.

```python
tokenizer = AutoTokenizer.from_pretrained("budecosystem/genz-13b-v2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("budecosystem/genz-13b-v2", torch_dtype=torch.bfloat16)
```

<h3>3️⃣ : Generate responses</h3>

Now that you have the model and tokenizer, you're ready to generate responses. Here's how you can do it:

```python
inputs = tokenizer("The meaning of life is", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```

In this example, "The meaning of life is" is the prompt used for inference. You can replace it with any string you like.

Want to interact with the model in a more intuitive way? We have a Gradio interface set up for that. Head over to our GitHub page, clone the repository, and run the ‘generate.py’ script to try it out. Happy experimenting! 😄

<h2>Fine-tuning 🎯</h2>

It's time to upgrade the model by fine-tuning it. You can do this using our provided finetune.py script. Here's an example command:

```bash
python finetune.py \
   --model_name meta-llama/Llama-2-13b \
   --data_path dataset.json \
   --output_dir output \
   --trust_remote_code \
   --prompt_column instruction \
   --response_column output \
   --pad_token_id 50256
```

---

<h2>Bonus: Colab Notebooks 📚 <b><i>(WIP)</i></b></h2>

Looking for an even simpler way to get started with GenZ? We've got you covered. We've prepared a pair of detailed Colab notebooks - one for Inference and one for Fine-tuning. These notebooks come pre-filled with all the information and code you'll need. All you'll have to do is run them!

Keep an eye out for these notebooks. They'll be added to the repository soon!

---

<h2>Why Use GenZ? 💡</h2>

You might be wondering, "Why should I choose GenZ over a pretrained model?" The answer lies in the extra mile we've gone to fine-tune our models. While pretrained models are undeniably powerful, GenZ brings something extra to the table. We've fine-tuned it with curated datasets, which means it has additional skills and capabilities beyond what a pretrained model can offer. Whether you need it for a simple task or a complex project, GenZ is up for the challenge.
What's more, we are committed to continuously enhancing GenZ. We believe in the power of constant learning and improvement. That's why we'll be regularly fine-tuning our models with various curated datasets to make them even better. Our goal is to reach the state of the art and beyond - and we're committed to staying the course until we get there.

But don't just take our word for it. We've provided detailed evaluations and performance details in a later section, so you can see the difference for yourself.

Choose GenZ and join us on this journey. Together, we can push the boundaries of what's possible with large language models.

---

<h2>Model Card for GenZ 13B 📄</h2>

Here's a quick overview of everything you need to know about GenZ 13B.

<h3>Model Details:</h3>

- Developed by: Bud Ecosystem
- Base pretrained model type: Llama V2 13B
- Model Architecture: GenZ 13B, fine-tuned on Llama V2 13B, is an auto-regressive language model that employs an optimized transformer architecture. The fine-tuning process for GenZ 13B leveraged Supervised Fine-Tuning (SFT).
- License: The model is available for commercial use under a custom commercial license. For more information, please visit: [Meta AI Model and Library Downloads](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

---

<h2>Intended Use 💼</h2>

When we created GenZ 13B, we had a clear vision of how it could be used to push the boundaries of what's possible with large language models. We also understand the importance of using such models responsibly. Here's a brief overview of the intended and out-of-scope uses for GenZ 13B.

<h3>Direct Use</h3>

GenZ 13B is designed to be a powerful tool for research on large language models. It's also an excellent foundation for further specialization and fine-tuning for specific use cases, such as:
- Text summarization
- Text generation
- Chatbot creation
- And much more!

<h3>Out-of-Scope Use 🚩</h3>

While GenZ 13B is versatile, there are certain uses that are out of scope:
- Production use without adequate assessment of risks and mitigation
- Any use cases which may be considered irresponsible or harmful
- Use in any manner that violates applicable laws or regulations, including trade compliance laws
- Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2

Remember, GenZ 13B, like any large language model, is trained on large-scale corpora representative of the web, and therefore may carry the stereotypes and biases commonly encountered online.

<h3>Recommendations 🧠</h3>

We recommend that users of GenZ 13B consider fine-tuning it for the specific set of tasks of interest. Appropriate precautions and guardrails should be taken for any production use. Using GenZ 13B responsibly is key to unlocking its full potential while maintaining a safe and respectful environment.

---

<h2>Training Details 📚</h2>

When fine-tuning GenZ 13B, we took a meticulous approach to ensure we were building on the solid base of the pretrained Llama V2 13B model in the most effective way. Here's a look at the key details of our training process:

<h3>Fine-Tuning Training Data</h3>

For the fine-tuning process, we used a carefully curated mix of datasets. These included data from OpenAssistant, an instruction fine-tuning dataset, and Thought Source for the Chain Of Thought (CoT) approach. This diverse mix of data sources helped us enhance the model's capabilities across a range of tasks.
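As an illustration of how such instruction data can be rendered into GenZ's `### User:` / `### Assistant:` chat template for SFT, here is a minimal sketch. The field names and sample below are hypothetical; the actual preprocessing lives in the repo's finetune.py.

```python
# Hypothetical sketch: render one instruction/response pair into GenZ's
# chat template, producing the text an SFT example would be tokenised from.
def to_sft_example(instruction: str, response: str) -> str:
    return f"### User:\n{instruction}\n\n### Assistant:\n{response}"

sample = {
    "instruction": "Summarise the benefits of open-source LLMs in one sentence.",
    "output": "They let anyone inspect, adapt, and build on state-of-the-art models.",
}
print(to_sft_example(sample["instruction"], sample["output"]))
```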
<h3>Fine-Tuning Procedure</h3>

We performed a full-parameter fine-tuning using Supervised Fine-Tuning (SFT). This was carried out on 4 A100 80GB GPUs, and the process took under 100 hours. To make the process more efficient, we used DeepSpeed's ZeRO-3 optimization.

<h3>Tokenizer</h3>

We used the SentencePiece tokenizer during the fine-tuning process. This tokenizer is known for its capability to handle open-vocabulary language tasks efficiently.

<h3>Hyperparameters</h3>

Here are the hyperparameters we used for fine-tuning:

| Hyperparameter | Value |
| -------------- | ----- |
| Warmup Ratio | 0.04 |
| Learning Rate Scheduler Type | Cosine |
| Learning Rate | 2e-5 |
| Number of Training Epochs | 3 |
| Per Device Training Batch Size | 4 |
| Gradient Accumulation Steps | 4 |
| Precision | FP16 |
| Optimizer | AdamW |

---

<h2>Evaluations 🎯</h2>

Evaluating our model is a key part of our fine-tuning process. It helps us understand how our model is performing and how it stacks up against other models. Here's a look at some of the key evaluations for GenZ 13B:

<h3>Benchmark Comparison</h3>

We've compared GenZ V1 with V2 to understand the improvements our fine-tuning has achieved.

| Model Name | MT Bench | Vicuna Bench | MMLU | Human Eval | Hellaswag | BBH |
|:----------:|:--------:|:------------:|:----:|:----------:|:---------:|:----:|
| Genz 13B | 6.12 | 86.1 | 53.62 | 17.68 | 77.38 | 37.76 |
| Genz 13B v2 | 6.79 | 87.2 | 53.68 | 21.95 | 77.48 | 38.1 |

<h3>MT Bench Score</h3>

A key evaluation metric we use is the MT Bench score. This score provides a comprehensive assessment of our model's performance across a range of tasks. We're proud to say that our model performs at a level that's close to the Llama-70B-chat model on MT Bench, and is at the top of the list among 13B models.

<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/mt_bench_score.png" width="500"></p>

In the transition from GenZ V1 to V2, we noticed some fascinating performance shifts. While we saw a slight dip in coding performance, two other areas, Roleplay and Math, saw noticeable improvements.

---

<h2>Looking Ahead 👀</h2>

We're excited about the journey ahead with GenZ. We're committed to continuously improving and enhancing our models, and we're excited to see what the open-source community will build with them. We believe in the power of collaboration, and we can't wait to see what we can achieve together.

Remember, we're just getting started. This is just the beginning of a journey that we believe will revolutionize the world of large language models. We invite you to join us on this exciting journey. Together, we can push the boundaries of what's possible with AI. 🚀

---

Check the GitHub for the code -> [GenZ](https://github.com/BudEcosystem/GenZ)

<!-- original-model-card end -->
btoscano/speecht5_tts_minds14
btoscano
2023-10-14T23:02:54Z
2
0
transformers
[ "transformers", "pytorch", "speecht5", "text-to-audio", "v1.0", "generated_from_trainer", "nl", "dataset:facebook/MInDS_14", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-10-14T17:48:30Z
--- language: - nl license: mit base_model: microsoft/speecht5_tts tags: - v1.0 - generated_from_trainer datasets: - facebook/MInDS_14 model-index: - name: SpeechT5_TTS_MInDS_14 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5_TTS_MInDS_14 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the MInDS_14 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
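For reference, here is a minimal sketch of how the training hyperparameters above map onto `Seq2SeqTrainingArguments` (assuming the usual SpeechT5 TTS fine-tuning setup; `output_dir` is a hypothetical path, and everything not listed on this card is left at its default):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_minds14",   # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,       # effective train batch size: 16
    warmup_steps=500,
    max_steps=500,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,                           # "Native AMP" mixed precision
)
```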
TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ
TheBloke
2023-10-14T22:51:30Z
8
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "de", "en", "base_model:VAGOsolutions/SauerkrautLM-7b-v1-mistral", "base_model:quantized:VAGOsolutions/SauerkrautLM-7b-v1-mistral", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-14T22:05:50Z
--- base_model: VAGOsolutions/SauerkrautLM-7b-v1-mistral inference: false language: - de - en library_name: transformers license: apache-2.0 model_creator: VAGO solutions model_name: SauerkrautLM 7B v1 Mistral model_type: mistral pipeline_tag: text-generation prompt_template: "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent\ \ gibt hilfreiche, detaillierte und h\xF6fliche Antworten. \nUser: {prompt} \nAssistant:\n" quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # SauerkrautLM 7B v1 Mistral - GPTQ - Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions) - Original model: [SauerkrautLM 7B v1 Mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) <!-- description start --> ## Description This repo contains GPTQ model files for [VAGO solutions's SauerkrautLM 7B v1 Mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GGUF) * [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Sauerkraut ``` Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. User: {prompt} Assistant: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. 
<details>
<summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ:gptq-4bit-32g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `SauerkrautLM-7B-v1-mistral-GPTQ`:

```shell
mkdir SauerkrautLM-7B-v1-mistral-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ --local-dir SauerkrautLM-7B-v1-mistral-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir SauerkrautLM-7B-v1-mistral-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir SauerkrautLM-7B-v1-mistral-GPTQ --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache.

This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir SauerkrautLM-7B-v1-mistral-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ --local-dir SauerkrautLM-7B-v1-mistral-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ
```

Note that using Git with HF repos is strongly discouraged.
It will be much slower than using `huggingface-hub`, and will use twice as much disk space since it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ`. - To download from a specific branch, enter for example `TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-7B-v1-mistral-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. User: {prompt} Assistant: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. User: {prompt} Assistant: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: VAGO solutions's SauerkrautLM 7B v1 Mistral ![SauerkrautLM](images/SauerkrautLM.png "SauerkrautLM") ## VAGO solutions SauerkrautLM Introducing SauerkrautLM-v1 - Your German Language Powerhouse! We are thrilled to unveil our **very first release**, **SauerkrautLM-v1**. This remarkable creation marks a significant milestone as it is specifically **tailored for the German-speaking community**. In a landscape where German language models are scarce, we are proud to offer a solution that fills this void. What sets SauerkrautLM-v1 apart is its versatility. Whether you are an individual looking to harness its capabilities for personal use or a business seeking to integrate it into your projects, our model is designed to accommodate all. It operates under the Apache 2.0 License, providing you with the freedom to explore its potential in both private and commercial applications. Performance is at the heart of SauerkrautLM-v1. We put it to the **test using a customized version of MT-Bench for the German language**, and the results speak volumes. It currently stands as the most robust German Language Model on Hugging Face (based on german mt-bench results), showcasing its exceptional capabilities. Rest assured, this model is here to shine and set new standards. And the best thing is it comes in three different sizes (3B, 7B, 13B) to address your individual needs. Our model's journey began with meticulous training using an **augmented dataset within the QLoRA approach**. This is just the beginning of our model series, promising even more innovative and powerful solutions in the future. 
Join us on this exciting adventure as we redefine the possibilities of language modeling for the German-speaking world. SauerkrautLM-v1 is here to empower your language-related endeavors like never before. ## All Models | Model | HF | GPTQ | GGUF | |-------|-------|-------|-------| | SauerkrautLM-3b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) | soon | soon | | SauerkrautLM-7b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1) | soon | soon | | SauerkrautLM-7b-v1-mistral | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) | soon | soon | | SauerkrautLM-13b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1) | soon | soon | ## Model Details **SauerkrautLM-7b-v1-mistral** **Training Dataset:** SauerkrautLM was trained on a mix of German data augmentation and translated data. We found that a simple translation of training data alone can lead to unnatural German phrasing. Data augmentation techniques were therefore used to ensure grammatical and syntactic correctness and a more natural German wording in our training data. **Training Procedure:** SauerkrautLM-7b-v1-mistral was fine-tuned using QLoRA on 1 A100 80GB with Axolotl. - **Trained by:** SauerkrautLM-v1 trained by VAGO solutions - **Model Type:** SauerkrautLM-v1 is an auto-regressive language model based on the transformer architecture - **Language(s):** German, English - **License:** APACHE 2.0 - **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected]) **Prompt Template:** ``` Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. User: {prompt} Assistant: ``` ## Evaluation **[MT-Bench-TrueGerman](https://huggingface.co/datasets/VAGOsolutions/MT-Bench-TrueGerman)** ![First Turn](images/FirstTurn.PNG "First Turn") ![Second Turn](images/SecondTurn.PNG "Second Turn") ![Average](images/Average.PNG "Average") ![Category Scores](images/SauerkrautLM-7b-mistral.png "Category Scores") ![Category Plot](images/SauerkrautLM-7b-v1-mistral.png "Category Plot") ## MT-Bench-TrueGerman Response Example **SauerkrautLM-7b-v1-mistral** **Prompt** > Stellen Sie sich vor, Sie verfassen einen Blogbeitrag, in dem Sie zwei beliebte Smartphone-Modelle vergleichen. Entwickeln Sie eine Gliederung für den Blogbeitrag, einschließlich der Hauptpunkte und Unterkapitel, um die Merkmale, die Leistung und die Benutzererfahrung der beiden Modelle effektiv zu vergleichen und gegenüberzustellen. Bitte antworten Sie in weniger als 200 Wörtern.", "Nehmen Sie Ihre vorherige Antwort und formulieren Sie sie als Limerick um. **Antwort SauerkrautLM-7b-v1-mistral:** ``` Titel: Vergleich von Smartphones: iPhone 13 Pro Max vs. Galaxy S22 Ultra\n\nEinleitung\nIn diesem Blogbeitrag werden wir zwei beliebte Smartphone-Modelle, das iPhone 13 Pro Max und die Galaxy S22 Ultra, vergleichen.
Diese beiden Geräte zeichnen sich durch innovative Merkmale, beeindruckende Leistung und eine überlegene Benutzererfahrung aus.\n\nTeil 1: Design\n- Das iPhone 13 Pro Max bietet ein modernes und elegantes Design mit einem Glas-Metallrahmen, während die Galaxy S22 Ultra ein minimalistischerer Ansatz mit einem Glas-Maske-Design verwendet.\n- Beide Geräte verfügen über eine hohe Bildschirmdichte, jedoch ist das AMOLED-Display der Galaxy S22 Ultra für eine bessere Farbtiefe und Helligkeit bekannt.\n- Die Größe der Bildschirme variiert zwischen den beiden Modellen, mit dem iPhone 13 Pro Max eine etwas kleinere 6,7 Zoll und der Galaxy S22 Ultra eine leistungsfähigere 6,5 Zoll.\n\nTeil 2: Leistung\n- Beide Geräte verfügen über hohe Leistungsmerkmale und bieten eine schnelle Leistung, aber die Entscheidungsfindung hängt von persönlichen Vorlieben und Anwendungsbereich ab.\n- Der neue A15 Bionic Chip im iPhone 13 Pro Max ist leistungsstärker als der Snapdragon 888 in der Galaxy S22 Ultra.\n- Die Akkulebenszeit variiert je nach Nutzung und Verbrauch, aber das iPhone 13 Pro Max hat im Vergleich zu seiner Vorgängerversion einen größeren Akku.\n\nTeil 3: Kamera\n- Beide Geräte bieten hochwertige Kameraeinstellungen und hochauflösende Bildqualität.\n- Der iPhone 13 Pro Max verfügt über eine 12-MP-Sucher und eine 12-MP-Ultraweitwinkelkamera, während die Galaxy S22 Ultra eine 50-MP-Primärkamera und eine 10-MP-Ultraweitwinkelkamera bietet.\n- Der Nachverfolgungsmodus der Galaxy S22 Ultra ist zudem für sich allein schneller und einfacher zu bedienen.\n\nTeil 4: Benutzererfahrung\n- Die Benutzeroberfläche des iPhone 13 Pro Max ist intuitiver und einfach zu bedienen, während die Galaxy S22 Ultra je nach Vorlieben der Nutzer eine individuelle Anpassung erfordern kann.\n- Die Nutzung von Apple-Diensten wie FaceTime und iCloud ist auf den iPhone 13 Pro Max bequemer, während die Nutzung von Samsung-Diensten auf der Galaxy S22 Ultra ein wenig komplexer sein kann.\n\nFazit\nOb du ein Fan von Apple- oder Samsung-Geräten bist, gibt es für beide Modelle in dieser Vergleichsrunde etwas, was dich begeistert. Der iPhone 13 Pro Max überzeugt mit seiner besseren Bildschirmdichte, der Galaxy S22 Ultra mit ihrer schnelleren Kamera und dem einfacheren Bedienungserlebnis. Letztendlich hängt die Wahl vom persönlichen Vorlieben und Anwendungsbereich ab.","In einer Welt, in der Smartphones wachsen und gedeihen,\nLiegt das Vergleichsspiel, wo zwei Herrschaften kämpfen.\nEin iPhone mit einem Max,\nUnd eine Galaxy, so scharf,\nMit einer Kamera, die alle fesseln wird! ``` **Prompt** > Sie sehen ein schönes rotes Haus auf Ihrer linken Seite und ein faszinierendes Gewächshaus auf Ihrer rechten Seite, einen attraktiven beheizten rosa Ort vor sich. Also, wo ist das Weiße Haus?", "Enthält die ursprüngliche Frage Hinweise, um den Standort des Weißen Hauses eindeutig zu bestimmen?"],"reference": ["Die Antwort ist \"Washington, DC\".", "Nein. **Antwort SauerkrautLM-7b-v1-mistral:** ``` Das Weiße Haus ist direkt in der Mitte zwischen dem roten und dem beheizten roten Ort.","Nein, die ursprüngliche Frage enthält keine direkten Hinweise, um den Standort des Weißen Hauses eindeutig zu bestimmen. ``` **Prompt** > Betrachten Sie einen Satelliten, der sich in einer kreisförmigen Umlaufbahn um die Erde befindet. Die Geschwindigkeit des Satelliten nimmt ab. Was wird mit dem orbitalen Radius des Satelliten und seiner Umlaufdauer passieren? 
Bitte rechtfertigen Sie Ihre Antwort unter Verwendung physikalischer Prinzipien.", "Was sind einige Sonderfälle oder Randfälle in Ihrer Lösung? Wie gehen Sie mit ihnen um?"] ,"reference": ["Der orbitale Radius wird zunehmen und die Umlaufdauer wird zunehmen **Antwort SauerkrautLM-7b-v1-mistral:** ``` Wenn ein Satellit sich in einer kreisförmigen Umlaufbahn um die Erde befindet, bewegt er sich mit einer konstanten Geschwindigkeit. Allerdings wird diese Geschwindigkeit im Laufe der Zeit aufgrund von Faktoren wie der Entfernung des Satelliten vom Erdmittelpunkt und der Gravitationskraft abnehmen.\n\nDurch die Abnahme der Geschwindigkeit des Satelliten verringert sich auch die Orbitalgeschwindigkeit. Gemäß dem sogenannten Kepler'schen Gesetz ändert sich die Geschwindigkeit eines Satelliten proportional zur Entfernung vom Zentrum der umlaufenden Umlaufbahn. Dadurch ändert sich auch die Umlaufdauer, da die Zeit, die der Satellit benötigt, um eine bestimmte Strecke zurückzulegen, proportional zur Geschwindigkeit und der Entfernung ist.\n\nDaher wird die Orbitalgeschwindigkeit des Satelliten aufgrund der Abnahme der Geschwindigkeit langsamer, und somit wird auch seine Umlaufdauer länger.","In meiner Lösung der Veränderung des Orbitalradius und der Umlaufdauer eines Satelliten müssen wir einige Sonderfälle und Randfälle berücksichtigen.\n\nEin Sonderfall ist zum Beispiel, wenn der Satellit in eine geostationäre Umlaufbahn um die Erde gelangen kann. In dieser Umlaufbahn befindet sich der Satellit in einer konstanten Höhenlage und scheint den Menschen aus der Erdatmosphäre ausgesetzt zu sein. Der Satellit dreht sich dabei jedoch nur um die Erde und betrachtet die Erde von der Brennkammer aus.\n\nIn diesem Fall bleibt der Orbitalradius des Satelliten konstant, da er immer eine höhere Position einnimmt und somit immer die gleiche Höhe beibehalten muss, um die Erde aufzubewahren. Die Umlaufdauer wird jedoch durch die Entfernung vom Erdmittelpunkt beeinflusst, da die Erde sich während der Umlaufzeit durch ihre Rotation weiterdreht. Da der Satellit sich immer in derselben Höhe befindet, ändert sich jedoch das Verhältnis von Umlaufzeit zu Umlaufdauer nur geringfügig.\n\nEin weiterer Sonderfall ist, wenn der Satellit in eine Polarumlaufbahn gelangen kann. In einer Polarumlaufbahn umkreist der Satellit die Erde in einer Höhe von etwa 500 Kilometern über dem Polarkreis. In dieser Umlaufbahn bleibt der Orbitalradius des Satelliten konstant, da er immer eine höhere Position einnimmt und somit immer die gleiche Höhe beibehalten muss, um die Erde aufzubewahren. Die Umlaufzeit wird jedoch je nach Abstand vom Erdmittelpunkt variieren, da die Gravitationskraft mit der Entfernung abnimmt.\n\nIm Falle der Polarumlaufbahn ändern sich die Umlaufzeit und der Orbitalradius des Satelliten aufgrund der Abnahme der Gravitationskraft. Je weiter der Satellit von der Erde entfernt ist, desto langsamer dreht er sich und desto weniger Gravitation wirkt sich auf ihn aus. Dadurch ändert sich das Verhältnis von Umlaufzeit zu Umlaufdauer und somit auch die Größe des Orbitalradii.\n\nIn den genannten Sonderfällen bleibt die Umgebung des Satelliten und seine Bewegung im Grunde unverändert. Allerdings ändert sich aufgrund der Variation von Gravitationskraft, Entfernung und Umlaufhöhe der Satellit seine Umlaufzeit und sein Orbitalradius. ``` ## Disclaimer Our models have been meticulously trained on extensive datasets. 
While we have made diligent efforts to thoroughly screen and eliminate any instances of coarse or inappropriate language from our data, we must inform users that despite our best efforts in data cleansing, the possibility of some such content slipping through cannot be entirely ruled out. Furthermore, it is important to note that we have implemented filters within our models; however, we cannot always guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the Apache 2.0 license remains applicable and is included with the model files. ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions. ## Collaborations We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
ankurcw/TrainedLoRAs
ankurcw
2023-10-14T22:30:46Z
1
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "dataset:ankurcw/Kurukshetra", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-10-14T21:36:07Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: Kurukshetra tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: false datasets: - ankurcw/Kurukshetra --- # LoRA DreamBooth - ankurcw/TrainedLoRAs These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, trained on @fffiloni's SD-XL trainer. The weights were trained on the concept prompt: ``` Kurukshetra ``` Use this keyword to trigger your custom model in your prompts. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Usage Make sure to upgrade diffusers to >= 0.19.0: ``` pip install diffusers --upgrade ``` In addition, make sure to install transformers, safetensors, accelerate as well as the invisible watermark: ``` pip install invisible_watermark transformers accelerate safetensors ``` To use the base model together with the trained LoRA weights, you can run: ```python import torch from diffusers import DiffusionPipeline, AutoencoderKL device = "cuda" if torch.cuda.is_available() else "cpu" vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16) pipe = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16, variant="fp16", use_safetensors=True ) pipe.to(device) # This is where you load your trained weights specific_safetensors = "pytorch_lora_weights.safetensors" lora_scale = 0.9 pipe.load_lora_weights( 'ankurcw/TrainedLoRAs', weight_name=specific_safetensors, # use_auth_token = True ) prompt = "A majestic Kurukshetra jumping from a big stone at night" image = pipe( prompt=prompt, num_inference_steps=50, cross_attention_kwargs={"scale": lora_scale} ).images[0] ```
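The pipeline call above returns a standard `PIL.Image`, so persisting or post-processing the result needs no extra dependencies. A minimal follow-on sketch (the output filename is just an illustration):

```python
# Save the generated image to disk; the filename is arbitrary
image.save("kurukshetra_night.png")

# Re-running with a lower scale weakens the LoRA's influence on the output
image_subtle = pipe(
    prompt=prompt,
    num_inference_steps=50,
    cross_attention_kwargs={"scale": 0.5}
).images[0]
```

The `scale` passed via `cross_attention_kwargs` blends the LoRA weights into the base model: `1.0` applies them fully, while values near `0.0` effectively disable them.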
or4cl3ai/IntelliChat
or4cl3ai
2023-10-14T22:20:25Z
0
0
null
[ "region:us" ]
null
2023-10-14T22:07:44Z
--- license: openrail datasets: - vikp/textbook_quality_programming - irds/codesearchnet - giganticode/java-cmpx-v1 - nickrosh/Evol-Instruct-Code-80k-v1 - bigcode/starcoderdata - bigcode/the-stack - bigcode/the-stack-smol - Cdaprod/AI-Developer-Prompts - code_x_glue_ct_code_to_text - codeparrot/github-code - codeparrot/github-code-clean - code_x_glue_cc_code_completion_line - >- autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558893 - bentrevett/multi30k - edbeeching/decision_transformer_gym_replay - psyche/common_crawl - Birchlabs/openai-prm800k-solutions-only - openchat/openchat_sharegpt4_dataset - Open-Orca/OpenOrca - cjvt/slownet - para_crawl - zeroshot/twitter-financial-news-sentiment - laugustyniak/political-advertising-pl - code_search_net - sukaka/novelai-webui - P1ayer-1/chatgpt-conversations-chatlogs.net - daniel2588/sarcasm - psmathur/orca_minis_uncensored_dataset - player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored-trained-on-Based - shahules786/prosocial-nsfw-reddit - Thewillonline/reddit-sarcasm - datasciencemmw/current-data - Oniichat/bluemoon_roleplay_chat_data_300k_messages - dell-research-harvard/AmericanStories - b-mc2/sql-create-context language: - en - it - fr - pt - la - ru - ro - el metrics: - accuracy - bertscore - bleu - code_eval
galbitang/autotrain-table_1015-95170146299
galbitang
2023-10-14T22:16:22Z
6
0
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "autotrain", "vision", "dataset:galbitang/autotrain-data-table_1015", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-14T22:06:34Z
--- tags: - autotrain - vision - image-classification datasets: - galbitang/autotrain-data-table_1015 widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 0.06256889386399588 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 95170146299 - CO2 Emissions (in grams): 0.0626 ## Validation Metrics - Loss: 0.851 - Accuracy: 0.751 - Macro F1: 0.694 - Micro F1: 0.751 - Weighted F1: 0.744 - Macro Precision: 0.728 - Micro Precision: 0.751 - Weighted Precision: 0.747 - Macro Recall: 0.679 - Micro Recall: 0.751 - Weighted Recall: 0.751
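The card ships no inference snippet, but since this is a standard `transformers` ViT image-classification checkpoint, it should load with the `image-classification` pipeline. A minimal sketch, assuming a local image file (the path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub
classifier = pipeline(
    "image-classification",
    model="galbitang/autotrain-table_1015-95170146299",
)

# Classify a local image; replace the path with your own file
for prediction in classifier("table_photo.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```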
TheBloke/SauerkrautLM-3B-v1-GGUF
TheBloke
2023-10-14T22:03:25Z
85
5
transformers
[ "transformers", "gguf", "llama", "text-generation", "de", "en", "base_model:VAGOsolutions/SauerkrautLM-3b-v1", "base_model:quantized:VAGOsolutions/SauerkrautLM-3b-v1", "license:other", "region:us" ]
text-generation
2023-10-14T21:01:32Z
--- base_model: VAGOsolutions/SauerkrautLM-3b-v1 inference: false language: - de - en library_name: transformers license: other model_creator: VAGO solutions model_name: SauerkrautLM 3B v1 model_type: llama pipeline_tag: text-generation prompt_template: "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent\ \ gibt hilfreiche, detaillierte und h\xF6fliche Antworten. \nUser: {prompt} \nAssistant:\n" quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # SauerkrautLM 3B v1 - GGUF - Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions) - Original model: [SauerkrautLM 3B v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) <!-- description start --> ## Description This repo contains GGUF format model files for [VAGO solutions's SauerkrautLM 3B v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GGUF) * [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Sauerkraut ``` Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. User: {prompt} Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221). They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [sauerkrautlm-3b-v1.Q4_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GGUF/blob/main/sauerkrautlm-3b-v1.Q4_0.gguf) | Q4_0 | 4 | 1.98 GB| 4.48 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [sauerkrautlm-3b-v1.Q4_1.gguf](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GGUF/blob/main/sauerkrautlm-3b-v1.Q4_1.gguf) | Q4_1 | 4 | 2.19 GB| 4.69 GB | legacy; small, substantial quality loss - prefer using Q3_K_L | | [sauerkrautlm-3b-v1.Q5_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GGUF/blob/main/sauerkrautlm-3b-v1.Q5_0.gguf) | Q5_0 | 5 | 2.40 GB| 4.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [sauerkrautlm-3b-v1.Q5_1.gguf](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GGUF/blob/main/sauerkrautlm-3b-v1.Q5_1.gguf) | Q5_1 | 5 | 2.60 GB| 5.10 GB | legacy; medium, low quality loss - prefer using Q5_K_M | | [sauerkrautlm-3b-v1.Q8_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GGUF/blob/main/sauerkrautlm-3b-v1.Q8_0.gguf) | Q8_0 | 8 | 3.64 GB| 6.14 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/SauerkrautLM-3B-v1-GGUF and below it, a specific filename to download, such as: sauerkrautlm-3b-v1.Q4_0.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/SauerkrautLM-3B-v1-GGUF sauerkrautlm-3b-v1.Q4_0.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/SauerkrautLM-3B-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-3B-v1-GGUF sauerkrautlm-3b-v1.Q4_0.gguf --local-dir .
--local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m sauerkrautlm-3b-v1.Q4_0.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. \nUser: {prompt} \nAssistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries (a short llama-cpp-python sketch is appended at the end of this card). ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/SauerkrautLM-3B-v1-GGUF", model_file="sauerkrautlm-3b-v1.Q4_0.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: VAGO solutions's SauerkrautLM 3B v1 ![SauerkrautLM](images/SauerkrautLM.png "SauerkrautLM") ## VAGO solutions SauerkrautLM Introducing SauerkrautLM-v1 - Your German Language Powerhouse! We are thrilled to unveil our **very first release**, **SauerkrautLM-v1**. This remarkable creation marks a significant milestone as it is specifically **tailored for the German-speaking community**. In a landscape where German language models are scarce, we are proud to offer a solution that fills this void. What sets SauerkrautLM-v1 apart is its versatility. Whether you are an individual looking to harness its capabilities for personal use or a business seeking to integrate it into your projects, our model is designed to accommodate all. It operates under the LLAMA 2 License, providing you with the freedom to explore its potential in both private and commercial applications. Performance is at the heart of SauerkrautLM-v1. We put it to the **test using a customized version of MT-Bench for the German language**, and the results speak volumes. 
It currently stands as the most robust German Language Model on Hugging Face (based on German MT-Bench results), showcasing its exceptional capabilities. Rest assured, this model is here to shine and set new standards. And the best thing is that it comes in three different sizes (3B, 7B, 13B) to address your individual needs. Our model's journey began with meticulous training using an **augmented dataset within the QLoRA approach**. This is just the beginning of our model series, promising even more innovative and powerful solutions in the future. Join us on this exciting adventure as we redefine the possibilities of language modeling for the German-speaking world. SauerkrautLM-v1 is here to empower your language-related endeavors like never before. ## All Models | Model | HF | GPTQ | GGUF | |-------|-------|-------|-------| | SauerkrautLM-3b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) | soon | soon | | SauerkrautLM-7b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1) | soon | soon | | SauerkrautLM-7b-v1-mistral | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) | soon | soon | | SauerkrautLM-13b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1) | soon | soon | ## Model Details **SauerkrautLM-3b-v1** **Training Dataset:** SauerkrautLM was trained on a mix of German data augmentation and translated data. We found that a simple translation of training data alone can lead to unnatural German phrasing. Data augmentation techniques were therefore used to ensure grammatical and syntactic correctness and a more natural German wording in our training data. **Training Procedure:** SauerkrautLM-3b-v1 was fine-tuned using QLoRA on 1 RTX 4090 24GB with Axolotl. - **Trained by:** SauerkrautLM-v1 trained by VAGO solutions - **Model Type:** SauerkrautLM-v1 is an auto-regressive language model based on the transformer architecture - **Language(s):** German, English - **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) - **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected]) **Prompt Template:** ``` Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. User: {prompt} Assistant: ``` ## Evaluation **[MT-Bench-TrueGerman](https://huggingface.co/datasets/VAGOsolutions/MT-Bench-TrueGerman)** ![First Turn](images/FirstTurn.PNG "First Turn") ![Second Turn](images/SecondTurn.PNG "Second Turn") ![Average](images/Average.PNG "Average") ![Category Scores](images/SauerkrautLM-3b.png "Category Scores") ![Category Plot](images/SauerkrautLM-3b-v1.png "Category Plot") ## MT-Bench-TrueGerman Response Example **SauerkrautLM-3b-v1** **Prompt** > Stellen Sie sich vor, Sie verfassen einen Blogbeitrag, in dem Sie zwei beliebte Smartphone-Modelle vergleichen. Entwickeln Sie eine Gliederung für den Blogbeitrag, einschließlich der Hauptpunkte und Unterkapitel, um die Merkmale, die Leistung und die Benutzererfahrung der beiden Modelle effektiv zu vergleichen und gegenüberzustellen. Bitte antworten Sie in weniger als 200 Wörtern.", "Nehmen Sie Ihre vorherige Antwort und formulieren Sie sie als Limerick um. **Antwort SauerkrautLM-3b-v1:** ``` Klar, ich kann Ihnen gerne helfen! Hier sind die Hauptpunkte und Unterkapitel für Ihren Blogbeitrag zu den beiden beliebten Smartphone-Modellen:\n1.
Hauptpunkte:\n- Untersuchen Sie die Leistung unterschiedlicher Smartphone-Modelle und erklären, welche sie überlegen machen.\n- Betrachten Sie die Benutzererfahrung: Welche Funktionen und App-Erweiterungen verwendet der Benutzer des jeweiligen Smartphones?\n- Überprüfen Sie die Batterieleistung: Welches Smartphone Model sorgfältig auf Batterieleistung achten kann?\n- Betrachten Sie die Verfügbarkeit für zusätzliche Ladegeräte: Welches Smartphone Model die schnellere Ladezeit ermöglicht?\n2. Unterkapitel:\n- Leistungsfaktor: Untersuchen Sie die Leistungsfähigkeit der Smartphones, indem Sie verschiedene Tests durchführen, um festzustellen, welche Modelle überlegen.\n- Benutzererfahrung: Betrachten Sie die Leistungsfähigkeit der Smartphones, indem Sie mit Benutzern aus verschiedenen Hintergrund-Nutzungsszenarien interagieren.\n- Batterielaufzeit: Untersuchen Sie die Batterielaufzeit der Smartphones, um herauszufinden, welches Modell die längere Lebensdauer bietet.\n- Zusätzliche Ladegeräte: Untersuchen Sie die Verfügbarkeit für zusätzliche Ladegeräte, indem Sie verschiedene Modelle testen, um festzustellen, welches Smartphone Modell die schnellere Ladezeit bietet.","Über den kühlen, weichen, süßen Gefühl, der während des Morgens herrscht, mögen wir uns nicht verwirren, während wir uns für den Tag entscheiden. ``` **Prompt** > Sie sehen ein schönes rotes Haus auf Ihrer linken Seite und ein faszinierendes Gewächshaus auf Ihrer rechten Seite, einen attraktiven beheizten rosa Ort vor sich. Also, wo ist das Weiße Haus?", "Enthält die ursprüngliche Frage Hinweise, um den Standort des Weißen Hauses eindeutig zu bestimmen?"],"reference": ["Die Antwort ist \"Washington, DC\".", "Nein. **Antwort SauerkrautLM-3b-v1:** ``` Das Weiße Haus ist auf der linken Seite des rechten Häuschens, wo sich der Gewächshaus befindet.\n\nHinweis: Der Gewächshaus ist nicht beheizt.","Ja, die ursprüngliche Frage vermittelt den Standort des Weißen Hauses eindeutig. ``` **Prompt** > Betrachten Sie einen Satelliten, der sich in einer kreisförmigen Umlaufbahn um die Erde befindet. Die Geschwindigkeit des Satelliten nimmt ab. Was wird mit dem orbitalen Radius des Satelliten und seiner Umlaufdauer passieren? Bitte rechtfertigen Sie Ihre Antwort unter Verwendung physikalischer Prinzipien.", "Was sind einige Sonderfälle oder Randfälle in Ihrer Lösung? Wie gehen Sie mit ihnen um?"] ,"reference": ["Der orbitale Radius wird zunehmen und die Umlaufdauer wird zunehmen **Antwort SauerkrautLM-3b-v1:** ``` Wenn der Satellit sich in einer kreisförmigen Umlaufbahn um die Erde befindet, wird sein orbitaler Radius abnehmen. Die Umlaufdauer des Satelliten kann je nach seiner Geschwindigkeit und der Schwerkraft konstant bleiben, wenn er sich nicht dauerhaft in einer kreisförmigen Umlaufbahn befindet. Wenn der Satellit sich in einer kreisförmigen Umlaufbahn befindet, wird sein orbitaler Radius nach dem Start unterschiedlich lang dauern, da er sich mit der Schwerkraft anzieht. Je nach seiner Geschwindigkeit und der Schwerkraft wird der Satellit sein orbitaler Radius ungefähr 1,5-2,2 Mal seiner Geschwindigkeit abnehmen, wenn er sich in einer kreisförmigen Umlaufbahn befindet.","Einige Sonderfälle in Ihrer Lösung sind:\n1. Zwischen zwei Schwerkraftwellen gibt es ein Schwerkraftschwank.\n2. Ein Kernschaltungskern könnte ein Kernschaltungskern sein, der sich mit einer anderen Kernschaltung kreuzt.\n3. Wenn der Kernschaltungskern sich in einem Kernschaltungskern befindet, könnte er seine eigenen Kernschaltungskern entstehen.\n4. 
Zwei Kernschaltungskern könnten sich in einem Kernschaltungskern befinden und die Schwerkraft auf sich verteilen, was zu einer Schwerkraftwellen führt.\n5. Ein Kernschaltungskern könnte aus mehreren Kernschaltungskernn bestehen.\n6. Wenn ein Kernschaltungskern sich in einer anderen Kernschaltungskern befindet, könnte er die Schwerkraft auf sich verteilen, was zu einer Schwerkraftwellen führt.\n7. Ein Kernschaltungskern könnte eine Schwerkraftwellen aus einer anderen Schwerkraftwellen befreien.\n8. Ein Kernschaltungskern könnte eine Schwerkraftwellen von einer anderen Schwerkraftwellen verursachen.\n\nDie Sonderfälle und Randfälle in Ihrer Lösung können je nach den spezifischen Konzepten und der Frage gelöst werden. Ebenso können Sie Sonderfälle oder Randfälle in Ihrer Lösung mit Ihren wissenschaftlichen Kenntnissen und Erfahrungen angehen ``` ## Disclaimer Our models have been meticulously trained on extensive datasets. While we have made diligent efforts to thoroughly screen and eliminate any instances of coarse or inappropriate language from our data, we must inform users that despite our best efforts in data cleansing, the possibility of some such content slipping through cannot be entirely ruled out. Furthermore, it is important to note that we have implemented filters within our models; however, we cannot always guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the original Llama2 license remains applicable and is included with the model files. ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions. ## Collaborations We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us. <!-- original-model-card end -->
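As referenced in the "How to run from Python code" section of this card, llama-cpp-python can also load these GGUF files, though only a ctransformers example is shown above. The following is a minimal llama-cpp-python sketch, not an official example from this repo; it assumes a recent llama-cpp-python release and that `sauerkrautlm-3b-v1.Q4_0.gguf` has already been downloaded to the working directory:

```python
from llama_cpp import Llama

# Load the downloaded GGUF file; set n_gpu_layers=0 for CPU-only inference
llm = Llama(
    model_path="sauerkrautlm-3b-v1.Q4_0.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

prompt = "Erkläre kurz, was ein Sprachmodell ist."
# Wrap the user prompt in the Sauerkraut prompt template from this card
prompt_template = (
    "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. "
    "Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. \n"
    f"User: {prompt} \nAssistant:\n"
)

# Generate up to 256 new tokens, stopping if the model starts a new "User:" turn
output = llm(prompt_template, max_tokens=256, temperature=0.7, stop=["User:"])
print(output["choices"][0]["text"])
```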
TheBloke/SauerkrautLM-3B-v1-GPTQ
TheBloke
2023-10-14T22:03:19Z
21
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "de", "en", "base_model:VAGOsolutions/SauerkrautLM-3b-v1", "base_model:quantized:VAGOsolutions/SauerkrautLM-3b-v1", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-14T21:01:52Z
--- base_model: VAGOsolutions/SauerkrautLM-3b-v1 inference: false language: - de - en library_name: transformers license: other model_creator: VAGO solutions model_name: SauerkrautLM 3B v1 model_type: llama pipeline_tag: text-generation prompt_template: "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent\ \ gibt hilfreiche, detaillierte und h\xF6fliche Antworten. \nUser: {prompt} \nAssistant:\n" quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # SauerkrautLM 3B v1 - GPTQ - Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions) - Original model: [SauerkrautLM 3B v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) <!-- description start --> ## Description This repo contains GPTQ model files for [VAGO solutions's SauerkrautLM 3B v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GGUF) * [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Sauerkraut ``` Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. User: {prompt} Assistant: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. 
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GPTQ/tree/main) | 4 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 2048 | 2.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 2048 | 3.64 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 2048 | 3.71 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 2048 | 3.94 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 2048 | 2.15 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/SauerkrautLM-3B-v1-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/SauerkrautLM-3B-v1-GPTQ:gptq-8bit--1g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `SauerkrautLM-3B-v1-GPTQ`:

```shell
mkdir SauerkrautLM-3B-v1-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-3B-v1-GPTQ --local-dir SauerkrautLM-3B-v1-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir SauerkrautLM-3B-v1-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-3B-v1-GPTQ --revision gptq-8bit--1g-actorder_True --local-dir SauerkrautLM-3B-v1-GPTQ --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir SauerkrautLM-3B-v1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-3B-v1-GPTQ --local-dir SauerkrautLM-3B-v1-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-8bit--1g-actorder_True https://huggingface.co/TheBloke/SauerkrautLM-3B-v1-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)

<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/SauerkrautLM-3B-v1-GPTQ`.
  - To download from a specific branch, enter for example `TheBloke/SauerkrautLM-3B-v1-GPTQ:gptq-8bit--1g-actorder_True`
  - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-3B-v1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
  * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/SauerkrautLM-3B-v1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template=f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```

### You can then use the following code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/SauerkrautLM-3B-v1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-8bit--1g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template=f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: VAGO solutions's SauerkrautLM 3B v1

![SauerkrautLM](images/SauerkrautLM.png "SauerkrautLM")
## VAGO solutions SauerkrautLM
Introducing SauerkrautLM-v1 - Your German Language Powerhouse!

We are thrilled to unveil our **very first release**, **SauerkrautLM-v1**. This remarkable creation marks a significant milestone as it is specifically **tailored for the German-speaking community**. In a landscape where German language models are scarce, we are proud to offer a solution that fills this void.

What sets SauerkrautLM-v1 apart is its versatility. Whether you are an individual looking to harness its capabilities for personal use or a business seeking to integrate it into your projects, our model is designed to accommodate all. It operates under the LLAMA 2 License, providing you with the freedom to explore its potential in both private and commercial applications.

Performance is at the heart of SauerkrautLM-v1. We put it to the **test using a customized version of MT-Bench for the German language**, and the results speak volumes. It currently stands as the most robust German Language Model on Hugging Face (based on german mt-bench results), showcasing its exceptional capabilities. Rest assured, this model is here to shine and set new standards. And the best thing is it comes in three different sizes (3B, 7B, 13B) to address your individual needs.

Our model's journey began with meticulous training using an **augmented dataset within the QLoRA approach**. This is just the beginning of our model series, promising even more innovative and powerful solutions in the future.

Join us on this exciting adventure as we redefine the possibilities of language modeling for the German-speaking world. SauerkrautLM-v1 is here to empower your language-related endeavors like never before.

## All Models

| Model | HF | GPTQ | GGUF |
|-------|-------|-------|-------|
| SauerkrautLM-3b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) | soon | soon |
| SauerkrautLM-7b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1) | soon | soon |
| SauerkrautLM-7b-v1-mistral | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) | soon | soon |
| SauerkrautLM-13b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1) | soon | soon |

## Model Details

**SauerkrautLM-3b-v1**

**Training Dataset:**

SauerkrautLM was trained with a mix of German data augmentation and translated data. We found that a simple translation of training data alone can lead to unnatural German phrasings. Data augmentation techniques were used to ensure grammatical and syntactical correctness and a more natural German wording in our training data.

**Training Procedure:**

SauerkrautLM-3b-v1 was fine-tuned using QLoRA on 1 RTX 4090 24GB with Axolotl.
- **Trained by:** SauerkrautLM-v1 trained by VAGO solutions
- **Model Type:** SauerkrautLM-v1 is an auto-regressive language model based on the transformer architecture
- **Language(s):** German, English
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected])

**Prompt Template:**

```
Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
```

## Evaluation

**[MT-Bench-TrueGerman](https://huggingface.co/datasets/VAGOsolutions/MT-Bench-TrueGerman)**

![First Turn](images/FirstTurn.PNG "First Turn")
![Second Turn](images/SecondTurn.PNG "Second Turn")
![Average](images/Average.PNG "Average")
![Category Scores](images/SauerkrautLM-3b.png "Category Scores")
![Category Plot](images/SauerkrautLM-3b-v1.png "Category Plot")

## MT-Bench-TrueGerman Response Example

**SauerkrautLM-3b-v1**

**Prompt**
> Stellen Sie sich vor, Sie verfassen einen Blogbeitrag, in dem Sie zwei beliebte Smartphone-Modelle vergleichen. Entwickeln Sie eine Gliederung für den Blogbeitrag, einschließlich der Hauptpunkte und Unterkapitel, um die Merkmale, die Leistung und die Benutzererfahrung der beiden Modelle effektiv zu vergleichen und gegenüberzustellen. Bitte antworten Sie in weniger als 200 Wörtern.", "Nehmen Sie Ihre vorherige Antwort und formulieren Sie sie als Limerick um.

**Antwort SauerkrautLM-3b-v1:**
```
Klar, ich kann Ihnen gerne helfen! Hier sind die Hauptpunkte und Unterkapitel für Ihren Blogbeitrag zu den beiden beliebten Smartphone-Modellen:\n1. Hauptpunkte:\n- Untersuchen Sie die Leistung unterschiedlicher Smartphone-Modelle und erklären, welche sie überlegen machen.\n- Betrachten Sie die Benutzererfahrung: Welche Funktionen und App-Erweiterungen verwendet der Benutzer des jeweiligen Smartphones?\n- Überprüfen Sie die Batterieleistung: Welches Smartphone Model sorgfältig auf Batterieleistung achten kann?\n- Betrachten Sie die Verfügbarkeit für zusätzliche Ladegeräte: Welches Smartphone Model die schnellere Ladezeit ermöglicht?\n2. Unterkapitel:\n- Leistungsfaktor: Untersuchen Sie die Leistungsfähigkeit der Smartphones, indem Sie verschiedene Tests durchführen, um festzustellen, welche Modelle überlegen.\n- Benutzererfahrung: Betrachten Sie die Leistungsfähigkeit der Smartphones, indem Sie mit Benutzern aus verschiedenen Hintergrund-Nutzungsszenarien interagieren.\n- Batterielaufzeit: Untersuchen Sie die Batterielaufzeit der Smartphones, um herauszufinden, welches Modell die längere Lebensdauer bietet.\n- Zusätzliche Ladegeräte: Untersuchen Sie die Verfügbarkeit für zusätzliche Ladegeräte, indem Sie verschiedene Modelle testen, um festzustellen, welches Smartphone Modell die schnellere Ladezeit bietet.","Über den kühlen, weichen, süßen Gefühl, der während des Morgens herrscht, mögen wir uns nicht verwirren, während wir uns für den Tag entscheiden.
```

**Prompt**
> Sie sehen ein schönes rotes Haus auf Ihrer linken Seite und ein faszinierendes Gewächshaus auf Ihrer rechten Seite, einen attraktiven beheizten rosa Ort vor sich. Also, wo ist das Weiße Haus?", "Enthält die ursprüngliche Frage Hinweise, um den Standort des Weißen Hauses eindeutig zu bestimmen?"],"reference": ["Die Antwort ist \"Washington, DC\".", "Nein.
**Antwort SauerkrautLM-3b-v1:** ``` Das Weiße Haus ist auf der linken Seite des rechten Häuschens, wo sich der Gewächshaus befindet.\n\nHinweis: Der Gewächshaus ist nicht beheizt.","Ja, die ursprüngliche Frage vermittelt den Standort des Weißen Hauses eindeutig. ``` **Prompt** > Betrachten Sie einen Satelliten, der sich in einer kreisförmigen Umlaufbahn um die Erde befindet. Die Geschwindigkeit des Satelliten nimmt ab. Was wird mit dem orbitalen Radius des Satelliten und seiner Umlaufdauer passieren? Bitte rechtfertigen Sie Ihre Antwort unter Verwendung physikalischer Prinzipien.", "Was sind einige Sonderfälle oder Randfälle in Ihrer Lösung? Wie gehen Sie mit ihnen um?"] ,"reference": ["Der orbitale Radius wird zunehmen und die Umlaufdauer wird zunehmen **Antwort SauerkrautLM-3b-v1:** ``` Wenn der Satellit sich in einer kreisförmigen Umlaufbahn um die Erde befindet, wird sein orbitaler Radius abnehmen. Die Umlaufdauer des Satelliten kann je nach seiner Geschwindigkeit und der Schwerkraft konstant bleiben, wenn er sich nicht dauerhaft in einer kreisförmigen Umlaufbahn befindet. Wenn der Satellit sich in einer kreisförmigen Umlaufbahn befindet, wird sein orbitaler Radius nach dem Start unterschiedlich lang dauern, da er sich mit der Schwerkraft anzieht. Je nach seiner Geschwindigkeit und der Schwerkraft wird der Satellit sein orbitaler Radius ungefähr 1,5-2,2 Mal seiner Geschwindigkeit abnehmen, wenn er sich in einer kreisförmigen Umlaufbahn befindet.","Einige Sonderfälle in Ihrer Lösung sind:\n1. Zwischen zwei Schwerkraftwellen gibt es ein Schwerkraftschwank.\n2. Ein Kernschaltungskern könnte ein Kernschaltungskern sein, der sich mit einer anderen Kernschaltung kreuzt.\n3. Wenn der Kernschaltungskern sich in einem Kernschaltungskern befindet, könnte er seine eigenen Kernschaltungskern entstehen.\n4. Zwei Kernschaltungskern könnten sich in einem Kernschaltungskern befinden und die Schwerkraft auf sich verteilen, was zu einer Schwerkraftwellen führt.\n5. Ein Kernschaltungskern könnte aus mehreren Kernschaltungskernn bestehen.\n6. Wenn ein Kernschaltungskern sich in einer anderen Kernschaltungskern befindet, könnte er die Schwerkraft auf sich verteilen, was zu einer Schwerkraftwellen führt.\n7. Ein Kernschaltungskern könnte eine Schwerkraftwellen aus einer anderen Schwerkraftwellen befreien.\n8. Ein Kernschaltungskern könnte eine Schwerkraftwellen von einer anderen Schwerkraftwellen verursachen.\n\nDie Sonderfälle und Randfälle in Ihrer Lösung können je nach den spezifischen Konzepten und der Frage gelöst werden. Ebenso können Sie Sonderfälle oder Randfälle in Ihrer Lösung mit Ihren wissenschaftlichen Kenntnissen und Erfahrungen angehen ``` ## Disclaimer Our models have been meticulously trained on extensive datasets. While we have made diligent efforts to thoroughly screen and eliminate any instances of coarse or inappropriate language from our data, we must inform users that despite our best efforts in data cleansing, the possibility of some such content slipping through cannot be entirely ruled out. Furthermore, it is important to note that we have implemented filters within our models; however, we cannot always guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. 
We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the original Llama2 license remains applicable and is included with the model files.   ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.   ## Collaborations We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
SyedAunZaidi/llama-2-7B-finetuned-dolly-lora
SyedAunZaidi
2023-10-14T21:42:20Z
3
0
peft
[ "peft", "region:us" ]
null
2023-10-14T21:42:19Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
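### Example usage

A minimal, untested sketch of loading this adapter for inference with PEFT. The base model ID below is an assumption inferred from the repository name; it is not stated in this card. `load_in_8bit=True` mirrors the bitsandbytes config above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model, inferred from the adapter repo name

# Load the base model in 8-bit, matching the load_in_8bit: True setting used during training
base_model = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base_model, "SyedAunZaidi/llama-2-7B-finetuned-dolly-lora")
model.eval()
```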
TheBloke/SauerkrautLM-7B-v1-GPTQ
TheBloke
2023-10-14T21:00:07Z
8
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "de", "en", "base_model:VAGOsolutions/SauerkrautLM-7b-v1", "base_model:quantized:VAGOsolutions/SauerkrautLM-7b-v1", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-14T20:27:34Z
--- base_model: VAGOsolutions/SauerkrautLM-7b-v1 inference: false language: - de - en library_name: transformers license: llama2 model_creator: VAGO solutions model_name: SauerkrautLM 7B v1 model_type: llama pipeline_tag: text-generation prompt_template: "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent\ \ gibt hilfreiche, detaillierte und h\xF6fliche Antworten. \nUser: {prompt} \nAssistant:\n" quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # SauerkrautLM 7B v1 - GPTQ - Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions) - Original model: [SauerkrautLM 7B v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1) <!-- description start --> ## Description This repo contains GPTQ model files for [VAGO solutions's SauerkrautLM 7B v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-GGUF) * [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Sauerkraut ``` Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. User: {prompt} Assistant: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. 
Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. 
|

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/SauerkrautLM-7B-v1-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/SauerkrautLM-7B-v1-GPTQ:gptq-4bit-32g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `SauerkrautLM-7B-v1-GPTQ`:

```shell
mkdir SauerkrautLM-7B-v1-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-7B-v1-GPTQ --local-dir SauerkrautLM-7B-v1-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir SauerkrautLM-7B-v1-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-7B-v1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir SauerkrautLM-7B-v1-GPTQ --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir SauerkrautLM-7B-v1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-7B-v1-GPTQ --local-dir SauerkrautLM-7B-v1-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)

<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/SauerkrautLM-7B-v1-GPTQ`.
  - To download from a specific branch, enter for example `TheBloke/SauerkrautLM-7B-v1-GPTQ:gptq-4bit-32g-actorder_True`
  - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-7B-v1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
  * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/SauerkrautLM-7B-v1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template=f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```

### You can then use the following code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/SauerkrautLM-7B-v1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template=f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: VAGO solutions's SauerkrautLM 7B v1 ![SauerkrautLM](images/SauerkrautLM.png "SauerkrautLM") ## VAGO solutions SauerkrautLM Introducing SauerkrautLM-v1 - Your German Language Powerhouse! We are thrilled to unveil our **very first release**, **SauerkrautLM-v1**. This remarkable creation marks a significant milestone as it is specifically **tailored for the German-speaking community**. In a landscape where German language models are scarce, we are proud to offer a solution that fills this void. What sets SauerkrautLM-v1 apart is its versatility. Whether you are an individual looking to harness its capabilities for personal use or a business seeking to integrate it into your projects, our model is designed to accommodate all. It operates under the LLAMA 2 License, providing you with the freedom to explore its potential in both private and commercial applications. Performance is at the heart of SauerkrautLM-v1. We put it to the **test using a customized version of MT-Bench for the German language**, and the results speak volumes. It currently stands as the most robust German Language Model on Hugging Face (based on german mt-bench results), showcasing its exceptional capabilities. Rest assured, this model is here to shine and set new standards. And the best thing is it comes in three different sizes (3B, 7B, 13B) to address your individual needs. Our model's journey began with meticulous training using an **augmented dataset within the QLoRA approach**. This is just the beginning of our model series, promising even more innovative and powerful solutions in the future. 
Join us on this exciting adventure as we redefine the possibilities of language modeling for the German-speaking world. SauerkrautLM-v1 is here to empower your language-related endeavors like never before.

## All Models

| Model | HF | GPTQ | GGUF |
|-------|-------|-------|-------|
| SauerkrautLM-3b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) | soon | soon |
| SauerkrautLM-7b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1) | soon | soon |
| SauerkrautLM-7b-v1-mistral | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) | soon | soon |
| SauerkrautLM-13b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1) | soon | soon |

## Model Details

**SauerkrautLM-7b-v1**

**Training Dataset:**

SauerkrautLM was trained with a mix of German data augmentation and translated data. We found that a simple translation of training data alone can lead to unnatural German phrasings. Data augmentation techniques were used to ensure grammatical and syntactical correctness and a more natural German wording in our training data.

**Training Procedure:**

SauerkrautLM-7b-v1 was fine-tuned using QLoRA on 1 A100 80GB with Axolotl.

- **Trained by:** SauerkrautLM-v1 trained by VAGO solutions
- **Model Type:** SauerkrautLM-v1 is an auto-regressive language model based on the transformer architecture
- **Language(s):** German, English
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected])

**Prompt Template:**

```
Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
```

## Evaluation

**[MT-Bench-TrueGerman](https://huggingface.co/datasets/VAGOsolutions/MT-Bench-TrueGerman)**

![First Turn](images/FirstTurn.PNG "First Turn")
![Second Turn](images/SecondTurn.PNG "Second Turn")
![Average](images/Average.PNG "Average")
![Category Scores](images/SauerkrautLM-7b.png "Category Scores")
![Category Plot](images/SauerkrautLM-7b-v1.png "Category Plot")

## MT-Bench-TrueGerman Response Example

**SauerkrautLM-7b-v1**

**Prompt**
> Stellen Sie sich vor, Sie verfassen einen Blogbeitrag, in dem Sie zwei beliebte Smartphone-Modelle vergleichen. Entwickeln Sie eine Gliederung für den Blogbeitrag, einschließlich der Hauptpunkte und Unterkapitel, um die Merkmale, die Leistung und die Benutzererfahrung der beiden Modelle effektiv zu vergleichen und gegenüberzustellen. Bitte antworten Sie in weniger als 200 Wörtern.", "Nehmen Sie Ihre vorherige Antwort und formulieren Sie sie als Limerick um.

**Antwort SauerkrautLM-7b-v1:**
```
Titel: Vergleich von Smartphones: iPhone 13 Pro Max vs. Galaxy S22 Ultra\n\nEinleitung\nIn diesem Blogbeitrag werden wir zwei beliebte Smartphone-Modelle, das iPhone 13 Pro Max und die Galaxy S22 Ultra, vergleichen.
Diese beiden Geräte zeichnen sich durch innovative Merkmale, beeindruckende Leistung und eine überlegene Benutzererfahrung aus.\n\nTeil 1: Design\n- Das iPhone 13 Pro Max bietet ein modernes und elegantes Design mit einem Glas-Metallrahmen, während die Galaxy S22 Ultra ein minimalistischerer Ansatz mit einem Glas-Maske-Design verwendet.\n- Beide Geräte verfügen über eine hohe Bildschirmdichte, jedoch ist das AMOLED-Display der Galaxy S22 Ultra für eine bessere Farbtiefe und Helligkeit bekannt.\n- Die Größe der Bildschirme variiert zwischen den beiden Modellen, mit dem iPhone 13 Pro Max eine etwas kleinere 6,7 Zoll und der Galaxy S22 Ultra eine leistungsfähigere 6,5 Zoll.\n\nTeil 2: Leistung\n- Beide Geräte verfügen über hohe Leistungsmerkmale und bieten eine schnelle Leistung, aber die Entscheidungsfindung hängt von persönlichen Vorlieben und Anwendungsbereich ab.\n- Der neue A15 Bionic Chip im iPhone 13 Pro Max ist leistungsstärker als der Snapdragon 888 in der Galaxy S22 Ultra.\n- Die Akkulebenszeit variiert je nach Nutzung und Verbrauch, aber das iPhone 13 Pro Max hat im Vergleich zu seiner Vorgängerversion einen größeren Akku.\n\nTeil 3: Kamera\n- Beide Geräte bieten hochwertige Kameraeinstellungen und hochauflösende Bildqualität.\n- Der iPhone 13 Pro Max verfügt über eine 12-MP-Sucher und eine 12-MP-Ultraweitwinkelkamera, während die Galaxy S22 Ultra eine 50-MP-Primärkamera und eine 10-MP-Ultraweitwinkelkamera bietet.\n- Der Nachverfolgungsmodus der Galaxy S22 Ultra ist zudem für sich allein schneller und einfacher zu bedienen.\n\nTeil 4: Benutzererfahrung\n- Die Benutzeroberfläche des iPhone 13 Pro Max ist intuitiver und einfach zu bedienen, während die Galaxy S22 Ultra je nach Vorlieben der Nutzer eine individuelle Anpassung erfordern kann.\n- Die Nutzung von Apple-Diensten wie FaceTime und iCloud ist auf den iPhone 13 Pro Max bequemer, während die Nutzung von Samsung-Diensten auf der Galaxy S22 Ultra ein wenig komplexer sein kann.\n\nFazit\nOb du ein Fan von Apple- oder Samsung-Geräten bist, gibt es für beide Modelle in dieser Vergleichsrunde etwas, was dich begeistert. Der iPhone 13 Pro Max überzeugt mit seiner besseren Bildschirmdichte, der Galaxy S22 Ultra mit ihrer schnelleren Kamera und dem einfacheren Bedienungserlebnis. Letztendlich hängt die Wahl vom persönlichen Vorlieben und Anwendungsbereich ab.","In einer Welt, in der Smartphones wachsen und gedeihen,\nLiegt das Vergleichsspiel, wo zwei Herrschaften kämpfen.\nEin iPhone mit einem Max,\nUnd eine Galaxy, so scharf,\nMit einer Kamera, die alle fesseln wird! ``` **Prompt** > Sie sehen ein schönes rotes Haus auf Ihrer linken Seite und ein faszinierendes Gewächshaus auf Ihrer rechten Seite, einen attraktiven beheizten rosa Ort vor sich. Also, wo ist das Weiße Haus?", "Enthält die ursprüngliche Frage Hinweise, um den Standort des Weißen Hauses eindeutig zu bestimmen?"],"reference": ["Die Antwort ist \"Washington, DC\".", "Nein. **Antwort SauerkrautLM-7b-v1:** ``` Das Weiße Haus ist direkt in der Mitte zwischen dem roten und dem beheizten roten Ort.","Nein, die ursprüngliche Frage enthält keine direkten Hinweise, um den Standort des Weißen Hauses eindeutig zu bestimmen. ``` **Prompt** > Betrachten Sie einen Satelliten, der sich in einer kreisförmigen Umlaufbahn um die Erde befindet. Die Geschwindigkeit des Satelliten nimmt ab. Was wird mit dem orbitalen Radius des Satelliten und seiner Umlaufdauer passieren? 
Bitte rechtfertigen Sie Ihre Antwort unter Verwendung physikalischer Prinzipien.", "Was sind einige Sonderfälle oder Randfälle in Ihrer Lösung? Wie gehen Sie mit ihnen um?"] ,"reference": ["Der orbitale Radius wird zunehmen und die Umlaufdauer wird zunehmen **Antwort SauerkrautLM-7b-v1:** ``` Wenn ein Satellit sich in einer kreisförmigen Umlaufbahn um die Erde befindet, bewegt er sich mit einer konstanten Geschwindigkeit. Allerdings wird diese Geschwindigkeit im Laufe der Zeit aufgrund von Faktoren wie der Entfernung des Satelliten vom Erdmittelpunkt und der Gravitationskraft abnehmen.\n\nDurch die Abnahme der Geschwindigkeit des Satelliten verringert sich auch die Orbitalgeschwindigkeit. Gemäß dem sogenannten Kepler'schen Gesetz ändert sich die Geschwindigkeit eines Satelliten proportional zur Entfernung vom Zentrum der umlaufenden Umlaufbahn. Dadurch ändert sich auch die Umlaufdauer, da die Zeit, die der Satellit benötigt, um eine bestimmte Strecke zurückzulegen, proportional zur Geschwindigkeit und der Entfernung ist.\n\nDaher wird die Orbitalgeschwindigkeit des Satelliten aufgrund der Abnahme der Geschwindigkeit langsamer, und somit wird auch seine Umlaufdauer länger.","In meiner Lösung der Veränderung des Orbitalradius und der Umlaufdauer eines Satelliten müssen wir einige Sonderfälle und Randfälle berücksichtigen.\n\nEin Sonderfall ist zum Beispiel, wenn der Satellit in eine geostationäre Umlaufbahn um die Erde gelangen kann. In dieser Umlaufbahn befindet sich der Satellit in einer konstanten Höhenlage und scheint den Menschen aus der Erdatmosphäre ausgesetzt zu sein. Der Satellit dreht sich dabei jedoch nur um die Erde und betrachtet die Erde von der Brennkammer aus.\n\nIn diesem Fall bleibt der Orbitalradius des Satelliten konstant, da er immer eine höhere Position einnimmt und somit immer die gleiche Höhe beibehalten muss, um die Erde aufzubewahren. Die Umlaufdauer wird jedoch durch die Entfernung vom Erdmittelpunkt beeinflusst, da die Erde sich während der Umlaufzeit durch ihre Rotation weiterdreht. Da der Satellit sich immer in derselben Höhe befindet, ändert sich jedoch das Verhältnis von Umlaufzeit zu Umlaufdauer nur geringfügig.\n\nEin weiterer Sonderfall ist, wenn der Satellit in eine Polarumlaufbahn gelangen kann. In einer Polarumlaufbahn umkreist der Satellit die Erde in einer Höhe von etwa 500 Kilometern über dem Polarkreis. In dieser Umlaufbahn bleibt der Orbitalradius des Satelliten konstant, da er immer eine höhere Position einnimmt und somit immer die gleiche Höhe beibehalten muss, um die Erde aufzubewahren. Die Umlaufzeit wird jedoch je nach Abstand vom Erdmittelpunkt variieren, da die Gravitationskraft mit der Entfernung abnimmt.\n\nIm Falle der Polarumlaufbahn ändern sich die Umlaufzeit und der Orbitalradius des Satelliten aufgrund der Abnahme der Gravitationskraft. Je weiter der Satellit von der Erde entfernt ist, desto langsamer dreht er sich und desto weniger Gravitation wirkt sich auf ihn aus. Dadurch ändert sich das Verhältnis von Umlaufzeit zu Umlaufdauer und somit auch die Größe des Orbitalradii.\n\nIn den genannten Sonderfällen bleibt die Umgebung des Satelliten und seine Bewegung im Grunde unverändert. Allerdings ändert sich aufgrund der Variation von Gravitationskraft, Entfernung und Umlaufhöhe der Satellit seine Umlaufzeit und sein Orbitalradius. ``` ## Disclaimer Our models have been meticulously trained on extensive datasets. 
While we have made diligent efforts to thoroughly screen and eliminate any instances of coarse or inappropriate language from our data, we must inform users that despite our best efforts in data cleansing, the possibility of some such content slipping through cannot be entirely ruled out. Furthermore, it is important to note that we have implemented filters within our models; however, we cannot always guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the original Llama2 license remains applicable and is included with the model files.   ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.   ## Collaborations We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
SalmonAI123/bert-finetuned-squad
SalmonAI123
2023-10-14T20:59:08Z
10
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-10-14T18:46:49Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
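### Example usage

Since the card gives no usage example, here is a minimal, untested sketch using the `transformers` question-answering pipeline; the question and context below are illustrative only.

```python
from transformers import pipeline

# Extractive QA with this SQuAD-finetuned checkpoint
qa = pipeline("question-answering", model="SalmonAI123/bert-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the squad dataset.",
)
print(result["answer"], result["score"])  # answer span plus a confidence score
```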
Lucas66Zhang/sd-class-butterflies-32
Lucas66Zhang
2023-10-14T20:53:34Z
1
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-10-14T20:53:27Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Lucas66Zhang/sd-class-butterflies-32') image = pipeline().images[0] image ```
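Outside a notebook, the returned image (a `PIL.Image`) can be written to disk as a follow-up to the snippet above; the file name here is illustrative:

```python
image.save("butterfly.png")  # DDPMPipeline returns PIL images, which support .save()
```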
MattStammers/appo-atari_roadrunner
MattStammers
2023-10-14T20:45:54Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-27T00:33:29Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: atari_roadrunner
      type: atari_roadrunner
    metrics:
    - type: mean_reward
      value: 65540.00 +/- 7998.52
      name: mean_reward
      verified: false
---

An **APPO** model trained on the **atari_roadrunner** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MattStammers/APPO-atari_roadrunner
```

## About the Model

This model, as with all the others in these benchmarks, was initially trained asynchronously and un-seeded to 10 million steps, for the purposes of setting a Sample-Factory async baseline for this model on this environment, but only 3/57 made it. The aim is to reach state-of-the-art (SOTA) performance on each Atari environment. I will flag the models as SOTA when they reach or come near these levels.

The hyperparameters used in the model are the ones I have pushed to my fork of sample-factory: https://github.com/MattStammers/sample-factory. Given that https://huggingface.co/edbeeching has kindly shared his tuned hyperparameters, I saved time and energy by reusing many of them to maximise performance. However, he used 2 billion training steps; I started, as explained above, at 10 million and then moved to 100 million to see how performance goes:

```
hyperparameters = {
"device": "gpu", "seed": 1234, "num_policies": 2, "async_rl": true, "serial_mode": false, "batched_sampling": true, "num_batches_to_accumulate": 2, "worker_num_splits": 1, "policy_workers_per_policy": 1, "max_policy_lag": 1000, "num_workers": 16, "num_envs_per_worker": 2, "batch_size": 1024, "num_batches_per_epoch": 8, "num_epochs": 4, "rollout": 128, "recurrence": 1, "shuffle_minibatches": false, "gamma": 0.99, "reward_scale": 1.0, "reward_clip": 1000.0, "value_bootstrap": false, "normalize_returns": true, "exploration_loss_coeff": 0.0004677351413, "value_loss_coeff": 0.5, "kl_loss_coeff": 0.0, "exploration_loss": "entropy", "gae_lambda": 0.95, "ppo_clip_ratio": 0.1, "ppo_clip_value": 1.0, "with_vtrace": false, "vtrace_rho": 1.0, "vtrace_c": 1.0, "optimizer": "adam", "adam_eps": 1e-05, "adam_beta1": 0.9, "adam_beta2": 0.999, "max_grad_norm": 0.0, "learning_rate": 0.0003033891184, "lr_schedule": "linear_decay", "lr_schedule_kl_threshold": 0.008, "lr_adaptive_min": 1e-06, "lr_adaptive_max": 0.01, "obs_subtract_mean": 0.0, "obs_scale": 255.0, "normalize_input": true, "normalize_input_keys": [ "obs" ], "decorrelate_experience_max_seconds": 0, "decorrelate_envs_on_one_worker": true, "actor_worker_gpus": [], "set_workers_cpu_affinity": true, "force_envs_single_thread": false, "default_niceness": 0, "log_to_file": true, "experiment_summaries_interval": 3, "flush_summaries_interval": 30, "stats_avg": 100, "summaries_use_frameskip": true, "heartbeat_interval": 10, "heartbeat_reporting_interval": 60, "train_for_env_steps": 100000000, "train_for_seconds": 10000000000, "save_every_sec": 120, "keep_checkpoints": 2, "load_checkpoint_kind": "latest", "save_milestones_sec": 1200, "save_best_every_sec": 5, "save_best_metric": "reward", "save_best_after": 100000, "benchmark": false, "encoder_mlp_layers": [ 512, 512 ], "encoder_conv_architecture": "convnet_atari",
"convnet_atari", "encoder_conv_mlp_layers": [ 512 ], "use_rnn": false, "rnn_size": 512, "rnn_type": "gru", "rnn_num_layers": 1, "decoder_mlp_layers": [], "nonlinearity": "relu", "policy_initialization": "orthogonal", "policy_init_gain": 1.0, "actor_critic_share_weights": true, "adaptive_stddev": false, "continuous_tanh_scale": 0.0, "initial_stddev": 1.0, "use_env_info_cache": false, "env_gpu_actions": false, "env_gpu_observations": true, "env_frameskip": 4, "env_framestack": 4, } ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.atari.enjoy_atari --algo=APPO --env=atari_roadrunner --train_dir=./train_dir --experiment=APPO-atari_roadrunner ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.atari.train_atari --algo=APPO --env=atari_roadrunner --train_dir=./train_dir --experiment=APPO-atari_roadrunner --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
TheBloke/tora-70B-v1.0-GGUF
TheBloke
2023-10-14T20:36:25Z
73
2
transformers
[ "transformers", "gguf", "llama", "code", "math", "text-generation", "en", "dataset:gsm8k", "dataset:competition_math", "arxiv:2309.17452", "base_model:llm-agents/tora-70b-v1.0", "base_model:quantized:llm-agents/tora-70b-v1.0", "license:llama2", "region:us" ]
text-generation
2023-10-14T20:11:17Z
--- base_model: llm-agents/tora-70b-v1.0 datasets: - gsm8k - competition_math inference: false language: - en library_name: transformers license: llama2 metrics: - exact_match model_creator: LLM-Agents model_name: ToRA 70B v1.0 model_type: llama pipeline_tag: text-generation prompt_template: '<|user|> {prompt} <|assistant|> ' quantized_by: TheBloke tags: - code - math --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # ToRA 70B v1.0 - GGUF - Model creator: [LLM-Agents](https://huggingface.co/llm-agents) - Original model: [ToRA 70B v1.0](https://huggingface.co/llm-agents/tora-70b-v1.0) <!-- description start --> ## Description This repo contains GGUF format model files for [LLM-Agents's ToRA 70B v1.0](https://huggingface.co/llm-agents/tora-70b-v1.0). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/tora-70B-v1.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/tora-70B-v1.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF) * [LLM-Agents's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llm-agents/tora-70b-v1.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ToRA ``` <|user|> {prompt} <|assistant|> ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221). They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [tora-70b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF/blob/main/tora-70b-v1.0.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [tora-70b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF/blob/main/tora-70b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [tora-70b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF/blob/main/tora-70b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [tora-70b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF/blob/main/tora-70b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [tora-70b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF/blob/main/tora-70b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [tora-70b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF/blob/main/tora-70b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss | | [tora-70b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF/blob/main/tora-70b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [tora-70b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF/blob/main/tora-70b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [tora-70b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF/blob/main/tora-70b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [tora-70b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/tora-70B-v1.0-GGUF/blob/main/tora-70b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended | | tora-70b-v1.0.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | tora-70b-v1.0.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `tora-70b-v1.0.Q6_K.gguf-split-a` * `tora-70b-v1.0.Q6_K.gguf-split-b` ### q8_0 Please download: * `tora-70b-v1.0.Q8_0.gguf-split-a` * `tora-70b-v1.0.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat tora-70b-v1.0.Q6_K.gguf-split-* > tora-70b-v1.0.Q6_K.gguf && rm tora-70b-v1.0.Q6_K.gguf-split-* cat tora-70b-v1.0.Q8_0.gguf-split-* > tora-70b-v1.0.Q8_0.gguf && rm tora-70b-v1.0.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B tora-70b-v1.0.Q6_K.gguf-split-a + tora-70b-v1.0.Q6_K.gguf-split-b tora-70b-v1.0.Q6_K.gguf del tora-70b-v1.0.Q6_K.gguf-split-a tora-70b-v1.0.Q6_K.gguf-split-b COPY /B tora-70b-v1.0.Q8_0.gguf-split-a + tora-70b-v1.0.Q8_0.gguf-split-b tora-70b-v1.0.Q8_0.gguf del tora-70b-v1.0.Q8_0.gguf-split-a tora-70b-v1.0.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/tora-70B-v1.0-GGUF and below it, a specific filename to download, such as: tora-70b-v1.0.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/tora-70B-v1.0-GGUF tora-70b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/tora-70B-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/tora-70B-v1.0-GGUF tora-70b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m tora-70b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. 
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/tora-70B-v1.0-GGUF", model_file="tora-70b-v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: LLM-Agents's ToRA 70B v1.0 <h1 align="center"> ToRA: A Tool-Integrated Reasoning Agent <br> for Mathematical Problem Solving </h1> <p align="center"> <a href="https://microsoft.github.io/ToRA/"><b>[🌐 Website]</b></a> • <a href="https://arxiv.org/abs/2309.17452"><b>[📜 Paper]</b></a> • <a href="https://huggingface.co/llm-agents"><b>[🤗 HF Models]</b></a> • <a href="https://github.com/microsoft/ToRA"><b>[🐱 GitHub]</b></a> <br> <a href="https://twitter.com/zhs05232838/status/1708860992631763092"><b>[🐦 Twitter]</b></a> • <a href="https://www.reddit.com/r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/"><b>[💬 Reddit]</b></a> • <a href="https://notes.aimodels.fyi/researchers-announce-tora-training-language-models-to-better-understand-math-using-external-tools/">[🍀 Unofficial Blog]</a> <!-- <a href="#-quick-start">Quick Start</a> • --> <!-- <a href="#%EF%B8%8F-citation">Citation</a> --> </p> <p align="center"> Repo for "<a href="https://arxiv.org/abs/2309.17452" target="_blank">ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving</a>" </p> ## 🔥 News - [2023/10/08] 🔥🔥🔥 All ToRA models released at [HuggingFace](https://huggingface.co/llm-agents)!!! - [2023/09/29] ToRA paper, repo, and website released. ## 💡 Introduction ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical reasoning problems by interacting with tools, e.g., computation libraries and symbolic solvers. 
ToRA series seamlessly integrate natural language reasoning with the utilization of external tools, thereby amalgamating the analytical prowess of language and the computational efficiency of external tools. | Model | Size | GSM8k | MATH | AVG@10 math tasks<sup>&dagger;</sup> | |---|---|---|---|---| | GPT-4 | - | 92.0 | 42.5 | 78.3 | | GPT-4 (PAL) | - | 94.2 | 51.8 | 86.4 | | [ToRA-7B](https://huggingface.co/llm-agents/tora-7b-v1.0) | 7B | 68.8 | 40.1 | 62.4| | [ToRA-Code-7B](https://huggingface.co/llm-agents/tora-code-7b-v1.0) | 7B | 72.6 | 44.6 | 66.5| | [ToRA-13B](https://huggingface.co/llm-agents/tora-13b-v1.0) | 13B | 72.7 | 43.0 | 65.9| | [ToRA-Code-13B](https://huggingface.co/llm-agents/tora-code-13b-v1.0) | 13B | 75.8 | 48.1 | 71.3 | | [ToRA-Code-34B<sup>*</sup>](https://huggingface.co/llm-agents/tora-code-34b-v1.0) | 34B | 80.7 | **51.0** | 74.8 | | [ToRA-70B](https://huggingface.co/llm-agents/tora-70b-v1.0) | 70B | **84.3** | 49.7 | **76.9** | - <sup>*</sup>ToRA-Code-34B is currently the first and only open-source model to achieve over 50% accuracy (pass@1) on the MATH dataset, which significantly outperforms GPT-4’s CoT result (51.0 vs. 42.5), and is competitive with GPT-4 solving problems with programs. By open-sourcing our codes and models, we hope more breakthroughs will come! - <sup>&dagger;</sup>10 math tasks include GSM8k, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, SingleEQ, SingleOP, AddSub, and MultiArith. ## ⚡️ Training The models are trained on ToRA-Corpus 16k, which contains tool-integrated reasoning trajectories of MATH and GSM8k from GPT-4. We use imitation learning (i.e., SFT) to fine-tune the models, and then apply our proposed *output space shaping* to improve tool-integrated reasoning behaviors. Please refer to the [paper](https://arxiv.org/pdf/2309.17452.pdf) for more details. ## 🪁 Inference & Evaluation Please refer to ToRA's [GitHub repo](https://github.com/microsoft/ToRA) for inference, evaluation, and training code. ## ☕️ Citation If you find this repository helpful, please consider citing our paper: ``` @misc{gou2023tora, title={ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving}, author={Zhibin Gou and Zhihong Shao and Yeyun Gong and yelong shen and Yujiu Yang and Minlie Huang and Nan Duan and Weizhu Chen}, year={2023}, eprint={2309.17452}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- original-model-card end -->
victorlxh/iKG-v1.0
victorlxh
2023-10-14T20:31:16Z
4
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2302.13971", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-28T00:36:08Z
--- license: cc-by-nc-4.0 --- # ICKG Model Card ## Model Details ICKG (Integrated Contextual Knowledge Graph Generator, also referred to as iKG) is a knowledge graph construction (KGC) task-specific instruction-following language model fine-tuned from LMSYS's Vicuna-7B, which itself is derived from Meta's LLaMA LLM. - **Developed by**: [Xiaohui Li](https://xiaohui-victor-li.github.io/) - **Model type**: Auto-regressive language model based on the transformer architecture. - **License**: Non-commercial - **Finetuned from model**: [Vicuna-7B](https://huggingface.co/lmsys/vicuna-7b-v1.3) (originally from [LLaMA](https://arxiv.org/abs/2302.13971)). ## Model Sources - **Repository**: [https://github.com/your-github-repo](https://github.com/your-github-repo) - **Website**: [https://xiaohui-victor-li.github.io/FinDKG/](https://xiaohui-victor-li.github.io/FinDKG/) - **Paper**: [https://arxiv.org/abs/your-paper-id](https://arxiv.org/abs/your-paper-id) ## Uses The primary use of the ICKG LLM is generating knowledge graphs (KGs) from text documents, using its instruction-following capability with specialized prompts. It is intended for researchers, data scientists, and developers interested in natural language processing and knowledge graph construction. ## How to Get Started with the Model - **Python Code**: [https://github.com/your-github-repo/tree/main#api](https://github.com/your-github-repo/tree/main#api) - **Command line interface of FastChat**: [https://github.com/your-github-repo#ikg-weights](https://github.com/your-github-repo#ikg-weights) ## Training Details ICKG is fine-tuned from Vicuna-7B on ~3K instruction-following demonstrations, each pairing an input document for KG construction with the extracted KG triplets as the response output. ICKG thereby learns to extract a list of KG triplets from a given text document via prompt engineering. For more in-depth training details, refer to the "Generative Knowledge Graph Construction with Fine-tuned LLM" section of [the accompanying paper](https://arxiv.org/abs/your-paper-id). - **Prompt Template**: The entity types and relationships can be customized for specific tasks. `<input_text>` is the placeholder to replace with the document text. ``` From the provided document labeled as INPUT_TEXT, your task is to extract structured information from it in the form of triplet for constructing a knowledge graph. Each tuple should be in the form of ('h', 'type', 'r', 'o', 'type'), where 'h' stands for the head entity, 'r' for the relationship, and 'o' for the tail entity. The 'type' denotes the category of the corresponding entity. Do NOT include redundant triplets, NOT include triplets with relationship that occurs in the past. Note that the entities should not be generic, numerical or temporal (like dates or percentages). Entities must be classified into the following categories: ORG: Organizations other than government or regulatory bodies ORG/GOV: Government bodies (e.g., "United States Government") ORG/REG: Regulatory bodies (e.g., "Federal Reserve") PERSON: Individuals (e.g., "Elon Musk") GPE: Geopolitical entities such as countries, cities, etc. (e.g., "Germany") COMP: Companies (e.g., "Google") PRODUCT: Products or services (e.g., "iPhone") EVENT: Specific and Material Events (e.g., "Olympic Games", "Covid-19") SECTOR: Company sectors or industries (e.g., "Technology sector") ECON_INDICATOR: Economic indicators (e.g., "Inflation rate"), numerical value like "10%" is not a ECON_INDICATOR; FIN_INSTRUMENT: Financial and market instruments (e.g., "Stocks", "Global Markets") CONCEPT: Abstract ideas or notions or themes (e.g., "Inflation", "AI", "Climate Change") The relationships 'r' between these entities must be represented by one of the following relation verbs set: Has, Announce, Operate_In, Introduce, Produce, Control, Participates_In, Impact, Positive_Impact_On, Negative_Impact_On, Relate_To, Is_Member_Of, Invests_In, Raise, Decrease. Remember to conduct entity disambiguation, consolidating different phrases or acronyms that refer to the same entity (for instance, "UK Central Bank", "BOE" and "Bank of England" should be unified as "Bank of England"). Simplify each entity of the triplet to be less than four words. Your output should strictly be in a list format of triplets in the JSON list format of ('h', 'type', 'r', 'o', 'type'), where the relationship 'r' must be in the given relation verbs set above. Only output the list. =========================================================== As an Example, consider the following news excerpt: 'Apple Inc. is set to introduce the new iPhone 14 in the technology sector this month. The product's release is likely to positively impact Apple's stock value.' From this text, your output should be: [('Apple Inc.', 'COMP', 'Introduce', 'iPhone 14', 'PRODUCT'), ('Apple Inc.', 'COMP', 'Operate_In', 'Technology Sector', 'SECTOR'), ('iPhone 14', 'PRODUCT', 'Positive_Impact_On', 'Apple's Stock Value', 'FIN_INSTRUMENT')] INPUT_TEXT: <input_text> ``` ## Evaluation ICKG has undergone preliminary evaluation comparing its performance to GPT-3.5, GPT-4, and the original Vicuna-7B model. On the KG construction task it outperforms GPT-3.5 and Vicuna-7B while exhibiting capability comparable to GPT-4. ICKG excels in generating instruction-based knowledge graphs, with a particular emphasis on quality and adherence to format. For a more detailed introduction, refer to [the accompanying paper](https://arxiv.org/abs/your-paper-id).
HCNMonsch/PPO_LunarLander-v2
HCNMonsch
2023-10-14T20:16:59Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-17T11:58:20Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 279.30 +/- 14.01 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption; verify it in this repo's file list): ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub # Filename is assumed; check the repo's "Files and versions" tab. checkpoint = load_from_hub(repo_id="HCNMonsch/PPO_LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
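To sanity-check the reported mean reward, the loaded model can be evaluated as follows (a sketch; assumes SB3 >= 2.0, which uses Gymnasium, plus `gymnasium[box2d]` for LunarLander):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Reuses `model` from the loading snippet above.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```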
liuhaotian/llava-v1.5-13b-shard3gb
liuhaotian
2023-10-14T20:02:23Z
39
14
transformers
[ "transformers", "pytorch", "llava", "text-generation", "autotrain_compatible", "region:us" ]
text-generation
2023-10-14T05:23:14Z
--- inference: false --- <br> <br> # LLaVA Model Card ## Model details **This is the same model checkpoint as [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b), except that this one is sharded with a 3GB per-shard size limit to support machines with limited CPU RAM.** **Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. **Model date:** LLaVA-v1.5-13B was trained in September 2023. **Paper or resources for more information:** https://llava-vl.github.io/ ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. **Where to send questions or comments about the model:** https://github.com/haotian-liu/LLaVA/issues ## Intended use **Primary intended uses:** The primary use of LLaVA is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data. - 450K academic-task-oriented VQA data mixture. - 40K ShareGPT data. ## Evaluation dataset A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
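Not part of the original card: since this repo only re-shards the weights, loading should mirror the base checkpoint. The sketch below follows the loader documented in the LLaVA GitHub repo (https://github.com/haotian-liu/LLaVA); treat the function names and arguments as assumptions to check against that codebase.

```python
# Sketch only: requires the LLaVA codebase from https://github.com/haotian-liu/LLaVA
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "liuhaotian/llava-v1.5-13b-shard3gb"
# The 3GB shards reduce peak CPU RAM while the checkpoint streams in.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```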
TheBloke/tora-13B-v1.0-GGUF
TheBloke
2023-10-14T19:36:30Z
57
3
transformers
[ "transformers", "gguf", "llama", "code", "math", "text-generation", "en", "dataset:gsm8k", "dataset:competition_math", "arxiv:2309.17452", "base_model:llm-agents/tora-13b-v1.0", "base_model:quantized:llm-agents/tora-13b-v1.0", "license:llama2", "region:us" ]
text-generation
2023-10-14T19:30:20Z
--- base_model: llm-agents/tora-13b-v1.0 datasets: - gsm8k - competition_math inference: false language: - en library_name: transformers license: llama2 metrics: - exact_match model_creator: LLM-Agents model_name: ToRA 13B v1.0 model_type: llama pipeline_tag: text-generation prompt_template: '<|user|> {prompt} <|assistant|> ' quantized_by: TheBloke tags: - code - math --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # ToRA 13B v1.0 - GGUF - Model creator: [LLM-Agents](https://huggingface.co/llm-agents) - Original model: [ToRA 13B v1.0](https://huggingface.co/llm-agents/tora-13b-v1.0) <!-- description start --> ## Description This repo contains GGUF format model files for [LLM-Agents's ToRA 13B v1.0](https://huggingface.co/llm-agents/tora-13b-v1.0). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/tora-13B-v1.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/tora-13B-v1.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF) * [LLM-Agents's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llm-agents/tora-13b-v1.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ToRA ``` <|user|> {prompt} <|assistant|> ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221). They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [tora-13b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [tora-13b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [tora-13b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [tora-13b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [tora-13b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [tora-13b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [tora-13b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [tora-13b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [tora-13b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [tora-13b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [tora-13b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [tora-13b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/tora-13B-v1.0-GGUF and below it, a specific filename to download, such as: tora-13b-v1.0.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/tora-13B-v1.0-GGUF tora-13b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/tora-13B-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/tora-13B-v1.0-GGUF tora-13b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m tora-13b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/tora-13B-v1.0-GGUF", model_file="tora-13b-v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end --> <!-- original-model-card start --> # Original model card: LLM-Agents's ToRA 13B v1.0 <h1 align="center"> ToRA: A Tool-Integrated Reasoning Agent <br> for Mathematical Problem Solving </h1> <p align="center"> <a href="https://microsoft.github.io/ToRA/"><b>[🌐 Website]</b></a> • <a href="https://arxiv.org/abs/2309.17452"><b>[📜 Paper]</b></a> • <a href="https://huggingface.co/llm-agents"><b>[🤗 HF Models]</b></a> • <a href="https://github.com/microsoft/ToRA"><b>[🐱 GitHub]</b></a> <br> <a href="https://twitter.com/zhs05232838/status/1708860992631763092"><b>[🐦 Twitter]</b></a> • <a href="https://www.reddit.com/r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/"><b>[💬 Reddit]</b></a> • <a href="https://notes.aimodels.fyi/researchers-announce-tora-training-language-models-to-better-understand-math-using-external-tools/">[🍀 Unofficial Blog]</a> <!-- <a href="#-quick-start">Quick Start</a> • --> <!-- <a href="#%EF%B8%8F-citation">Citation</a> --> </p> <p align="center"> Repo for "<a href="https://arxiv.org/abs/2309.17452" target="_blank">ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving</a>" </p> ## 🔥 News - [2023/10/08] 🔥🔥🔥 All ToRA models released at [HuggingFace](https://huggingface.co/llm-agents)!!! - [2023/09/29] ToRA paper, repo, and website released. ## 💡 Introduction ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical reasoning problems by interacting with tools, e.g., computation libraries and symbolic solvers. ToRA series seamlessly integrate natural language reasoning with the utilization of external tools, thereby amalgamating the analytical prowess of language and the computational efficiency of external tools. | Model | Size | GSM8k | MATH | AVG@10 math tasks<sup>&dagger;</sup> | |---|---|---|---|---| | GPT-4 | - | 92.0 | 42.5 | 78.3 | | GPT-4 (PAL) | - | 94.2 | 51.8 | 86.4 | | [ToRA-7B](https://huggingface.co/llm-agents/tora-7b-v1.0) | 7B | 68.8 | 40.1 | 62.4| | [ToRA-Code-7B](https://huggingface.co/llm-agents/tora-code-7b-v1.0) | 7B | 72.6 | 44.6 | 66.5| | [ToRA-13B](https://huggingface.co/llm-agents/tora-13b-v1.0) | 13B | 72.7 | 43.0 | 65.9| | [ToRA-Code-13B](https://huggingface.co/llm-agents/tora-code-13b-v1.0) | 13B | 75.8 | 48.1 | 71.3 | | [ToRA-Code-34B<sup>*</sup>](https://huggingface.co/llm-agents/tora-code-34b-v1.0) | 34B | 80.7 | **51.0** | 74.8 | | [ToRA-70B](https://huggingface.co/llm-agents/tora-70b-v1.0) | 70B | **84.3** | 49.7 | **76.9** | - <sup>*</sup>ToRA-Code-34B is currently the first and only open-source model to achieve over 50% accuracy (pass@1) on the MATH dataset, which significantly outperforms GPT-4’s CoT result (51.0 vs. 42.5), and is competitive with GPT-4 solving problems with programs. By open-sourcing our codes and models, we hope more breakthroughs will come! - <sup>&dagger;</sup>10 math tasks include GSM8k, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, SingleEQ, SingleOP, AddSub, and MultiArith. ## ⚡️ Training The models are trained on ToRA-Corpus 16k, which contains tool-integrated reasoning trajectories of MATH and GSM8k from GPT-4. We use imitation learning (i.e., SFT) to fine-tune the models, and then apply our proposed *output space shaping* to improve tool-integrated reasoning behaviors. Please refer to the [paper](https://arxiv.org/pdf/2309.17452.pdf) for more details. ## 🪁 Inference & Evaluation Please refer to ToRA's [GitHub repo](https://github.com/microsoft/ToRA) for inference, evaluation, and training code. 
## ☕️ Citation If you find this repository helpful, please consider citing our paper: ``` @misc{gou2023tora, title={ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving}, author={Zhibin Gou and Zhihong Shao and Yeyun Gong and yelong shen and Yujiu Yang and Minlie Huang and Nan Duan and Weizhu Chen}, year={2023}, eprint={2309.17452}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- original-model-card end -->
hungphongtrn/outputs
hungphongtrn
2023-10-14T19:35:48Z
0
0
null
[ "generated_from_trainer", "base_model:filipealmeida/Mistral-7B-Instruct-v0.1-sharded", "base_model:finetune:filipealmeida/Mistral-7B-Instruct-v0.1-sharded", "license:apache-2.0", "region:us" ]
null
2023-10-14T19:35:44Z
--- license: apache-2.0 base_model: filipealmeida/Mistral-7B-Instruct-v0.1-sharded tags: - generated_from_trainer model-index: - name: outputs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model is a fine-tuned version of [filipealmeida/Mistral-7B-Instruct-v0.1-sharded](https://huggingface.co/filipealmeida/Mistral-7B-Instruct-v0.1-sharded) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - training_steps: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
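Not part of the auto-generated card: as a convenience, the hyperparameters listed above correspond roughly to the following `TrainingArguments` sketch (the output directory is taken from the model name; any PEFT/quantization setup used for the fine-tune is not shown):

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; the Adam betas/epsilon above are the optimizer defaults.
args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size = 4
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=20,
    fp16=True,  # "Native AMP" mixed precision
)
```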
Azma-AI/deberta-base-multi-label-classifier
Azma-AI
2023-10-14T19:31:58Z
32
1
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-10T11:09:58Z
# Model Card for DeBERTa-v3-base-tasksource-nli This is [DeBERTa-v3-base](https://hf.co/microsoft/deberta-v3-base) fine-tuned with multi-task learning on 600 tasks. This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for: - Zero-shot entailment-based classification pipeline (similar to bart-mnli), see [ZS]. - Natural language inference, and many other tasks with tasksource-adapters, see [TA]. - Further fine-tuning with a new task (classification, token classification or multiple-choice). # [ZS] Zero-shot classification pipeline ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="Azma-AI/deberta-base-multi-label-classifier") text = "one day I will see the world" candidate_labels = ['travel', 'cooking', 'dancing'] classifier(text, candidate_labels) ```
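Not from the original card: the base tasksource-nli checkpoint can also be used for plain NLI through the standard text-classification pipeline. Assuming this copy keeps the same default head and entailment/neutral/contradiction labels, usage would look like:

```python
from transformers import pipeline

nli = pipeline("text-classification", model="Azma-AI/deberta-base-multi-label-classifier")
# Premise / hypothesis pair; expected labels: entailment, neutral, contradiction.
nli({"text": "A man is eating pizza.", "text_pair": "A person is eating food."})
```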
TheBloke/ShiningValiant-1.2-GPTQ
TheBloke
2023-10-14T19:30:32Z
24
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "shining-valiant", "valiant", "valiant-labs", "llama-2", "llama-2-chat", "70b", "en", "base_model:ValiantLabs/Llama2-70B-ShiningValiant", "base_model:quantized:ValiantLabs/Llama2-70B-ShiningValiant", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-14T16:16:57Z
--- base_model: ValiantLabs/ShiningValiant inference: false language: - en license: llama2 model_creator: Valiant Labs model_name: ShiningValiant 1.2 model_type: llama pipeline_tag: text-generation prompt_template: '[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don''t know the answer to a question, please don''t share false information. <</SYS>> {prompt}[/INST] ' quantized_by: TheBloke tags: - shining-valiant - valiant - valiant-labs - llama - llama-2 - llama-2-chat - 70b --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # ShiningValiant 1.2 - GPTQ - Model creator: [Valiant Labs](https://huggingface.co/ValiantLabs) - Original model: [ShiningValiant 1.2](https://huggingface.co/ValiantLabs/ShiningValiant) <!-- description start --> ## Description This repo contains GPTQ model files for [Valiant Labs's ShiningValiant 1.2](https://huggingface.co/ValiantLabs/ShiningValiant). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ShiningValiant-1.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ShiningValiant-1.2-GPTQ) * [Valiant Labs's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ValiantLabs/ShiningValiant) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Llama-2-Chat ``` [INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. 
If you don't know the answer to a question, please don't share false information. <</SYS>> {prompt}[/INST] ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/ShiningValiant-1.2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/ShiningValiant-1.2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/ShiningValiant-1.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/ShiningValiant-1.2-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.77 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. 
| | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/ShiningValiant-1.2-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | | [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/ShiningValiant-1.2-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 31.84 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/ShiningValiant-1.2-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/ShiningValiant-1.2-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `ShiningValiant-1.2-GPTQ`: ```shell mkdir ShiningValiant-1.2-GPTQ huggingface-cli download TheBloke/ShiningValiant-1.2-GPTQ --local-dir ShiningValiant-1.2-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir ShiningValiant-1.2-GPTQ huggingface-cli download TheBloke/ShiningValiant-1.2-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir ShiningValiant-1.2-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir ShiningValiant-1.2-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/ShiningValiant-1.2-GPTQ --local-dir ShiningValiant-1.2-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/ShiningValiant-1.2-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/ShiningValiant-1.2-GPTQ`. - To download from a specific branch, enter for example `TheBloke/ShiningValiant-1.2-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `ShiningValiant-1.2-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/ShiningValiant-1.2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>> {prompt}[/INST] ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/ShiningValiant-1.2-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {prompt}[/INST] ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Valiant Labs's ShiningValiant 1.2 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/5rUJPhu_6LyDvSQogSVhk.jpeg) Shining Valiant is a chat model built on the Llama 2 architecture, finetuned on our data for insight, creativity, passion, and friendliness. - Uses the llama-2-70b-chat model, with safetensors - Finetuned on multiple runs across private and public data - Data focused on knowledge, enthusiasm, and structured reasoning ## Version The current version is **1.2**; congrats to our team on the new release! Previous versions remain available in the repository. New models will be released for everyone once our team's training and validation process is complete :) ## Prompting Shining Valiant uses the same prompt format as Llama 2 Chat - feel free to use your existing prompts and scripts! A few examples of different formats: 1. [INST] Good morning! Can you let me know how to parse a text file and turn the semicolons into commas? 
[/INST] 2. [INST] (You are an intelligent, helpful AI assistant.) Hello, can you write me a thank you letter? [/INST] 3. [INST] <<SYS>>You are an intelligent, helpful AI assistant.<</SYS>>Deep dive about a country with interesting history: [/INST] ## The Model Shining Valiant is built on top of Stellar Bright, which uses Llama 2's 70b parameter architecture and features upgraded general capability. (Stellar Bright uses public open source data only.) From there, we've created Shining Valiant through multiple finetuning runs on different compositions of our private dataset. Our private data focuses primarily on applying Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! We are actively working on expanding and improving the Shining Valiant dataset for use in future releases of this model and others. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg) Shining Valiant is created by Valiant Labs. We care about open source. For everyone to use. We encourage others to finetune further from our models.
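To make the prompt formats above concrete, here is a minimal Python sketch of how such Llama-2-Chat prompts can be assembled; the helper name and the example messages are illustrative assumptions, not part of the original card:

```python
# Minimal sketch of assembling a Llama-2-Chat style prompt, as used by ShiningValiant.
# The helper name and the example messages are illustrative only.
def build_llama2_chat_prompt(user_message: str, system_message: str | None = None) -> str:
    if system_message is not None:
        return f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{user_message} [/INST]"
    return f"[INST] {user_message} [/INST]"

print(build_llama2_chat_prompt(
    "Deep dive about a country with interesting history:",
    system_message="You are an intelligent, helpful AI assistant.",
))
```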
GrkmGuney/taybroks
GrkmGuney
2023-10-14T19:28:26Z
1
0
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-10-14T18:23:53Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: taybroks tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
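The card does not include a usage snippet. As a rough sketch only: AutoTrain DreamBooth runs on the SDXL base typically save LoRA weights, so loading might look like the following, where the assumption that this repo contains LoRA weights (rather than a full pipeline) is unverified:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model listed in the card, then apply the DreamBooth weights.
# Assumption: the repo stores LoRA weights, as AutoTrain DreamBooth usually produces.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("GrkmGuney/taybroks")

# The instance prompt "taybroks" from the card triggers the learned concept.
image = pipe("a photo of taybroks", num_inference_steps=30).images[0]
image.save("taybroks.png")
```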
jupitercoder/my_peft_model
jupitercoder
2023-10-14T18:43:22Z
1
0
peft
[ "peft", "region:us" ]
null
2023-10-14T18:43:20Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
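Since the card only lists the framework version, here is a minimal loading sketch, assuming the adapter targets a causal language model (the card does not say):

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "jupitercoder/my_peft_model"

# Read the adapter config to discover the base model it was trained on.
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model and tokenizer, then attach the PEFT adapter weights.
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, adapter_id)
```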
TheBloke/ALMA-7B-GPTQ
TheBloke
2023-10-14T18:40:45Z
20
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2309.11674", "base_model:haoranxu/ALMA-7B", "base_model:quantized:haoranxu/ALMA-7B", "license:mit", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-14T18:18:05Z
--- base_model: haoranxu/ALMA-7B inference: false license: mit model_creator: Haoran Xu model_name: ALMA 7B model_type: llama prompt_template: 'Translate this from Chinese to English: Chinese: {prompt} English: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # ALMA 7B - GPTQ - Model creator: [Haoran Xu](https://huggingface.co/haoranxu) - Original model: [ALMA 7B](https://huggingface.co/haoranxu/ALMA-7B) <!-- description start --> ## Description This repo contains GPTQ model files for [Haoran Xu's ALMA 7B](https://huggingface.co/haoranxu/ALMA-7B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ALMA-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ALMA-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ALMA-7B-GGUF) * [Haoran Xu's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/haoranxu/ALMA-7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ALMA ``` Translate this from Chinese to English: Chinese: {prompt} English: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Haoran Xu's ALMA 7B](https://huggingface.co/haoranxu/ALMA-7B). 
<!-- licensing end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/ALMA-7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/ALMA-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/ALMA-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/ALMA-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. 
| | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/ALMA-7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/ALMA-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/ALMA-7B-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/ALMA-7B-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `ALMA-7B-GPTQ`: ```shell mkdir ALMA-7B-GPTQ huggingface-cli download TheBloke/ALMA-7B-GPTQ --local-dir ALMA-7B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir ALMA-7B-GPTQ huggingface-cli download TheBloke/ALMA-7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir ALMA-7B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir ALMA-7B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/ALMA-7B-GPTQ --local-dir ALMA-7B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/ALMA-7B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/ALMA-7B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/ALMA-7B-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `ALMA-7B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/ALMA-7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Translate this from Chinese to English: Chinese: {prompt} English: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/ALMA-7B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''Translate this from Chinese to English: Chinese: {prompt} English: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Haoran Xu's ALMA 7B **ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance. Please find more details in our [paper](https://arxiv.org/abs/2309.11674). 
``` @misc{xu2023paradigm, title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models}, author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla}, year={2023}, eprint={2309.11674}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` We release four translation models presented in the paper: - **ALMA-7B**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data - **ALMA-7B-LoRA**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data - **ALMA-13B**: Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data - **ALMA-13B-LoRA** (Our best system): Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data Model checkpoints are released at huggingface: | Models | Base Model Link | LoRA Link | |:-------------:|:---------------:|:---------:| | ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - | | ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) | | ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - | | ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) | **Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They only experience stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models for translation purposes.** A quick start to use our best system (ALMA-13B-LoRA) for translation. An example of translating "我爱机器翻译。" into English: ``` import torch from peft import PeftModel from transformers import AutoModelForCausalLM from transformers import LlamaTokenizer # Load base model and LoRA weights model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto") model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA") tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left') # Add the source sentence into the prompt template prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:" input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda() # Translation with torch.no_grad(): generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9) outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(outputs) ``` Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA)
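The quick start above targets ALMA-13B-LoRA; a corresponding sketch for the GPTQ-quantised ALMA-7B in this repo might look as follows (beam-search settings borrowed from the quick start; this is not an official example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the GPTQ-quantised ALMA-7B from the main branch of this repo.
model = AutoModelForCausalLM.from_pretrained("TheBloke/ALMA-7B-GPTQ", device_map="auto", revision="main")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/ALMA-7B-GPTQ", use_fast=True)

# Fill the ALMA prompt template with the source sentence.
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```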
TheBloke/ALMA-13B-GGUF
TheBloke
2023-10-14T17:48:27Z
214
5
transformers
[ "transformers", "gguf", "llama", "arxiv:2309.11674", "base_model:haoranxu/ALMA-13B", "base_model:quantized:haoranxu/ALMA-13B", "license:mit", "region:us" ]
null
2023-10-14T17:32:55Z
--- base_model: haoranxu/ALMA-13B inference: false license: mit model_creator: Haoran Xu model_name: ALMA 13B model_type: llama prompt_template: 'Translate this from Chinese to English: Chinese: {prompt} English: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # ALMA 13B - GGUF - Model creator: [Haoran Xu](https://huggingface.co/haoranxu) - Original model: [ALMA 13B](https://huggingface.co/haoranxu/ALMA-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [Haoran Xu's ALMA 13B](https://huggingface.co/haoranxu/ALMA-13B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ALMA-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ALMA-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ALMA-13B-GGUF) * [Haoran Xu's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/haoranxu/ALMA-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ALMA ``` Translate this from Chinese to English: Chinese: {prompt} English: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Haoran Xu's ALMA 13B](https://huggingface.co/haoranxu/ALMA-13B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [alma-13b.Q2_K.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [alma-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [alma-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [alma-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [alma-13b.Q4_0.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [alma-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [alma-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [alma-13b.Q5_0.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [alma-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [alma-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [alma-13b.Q6_K.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [alma-13b.Q8_0.gguf](https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/ALMA-13B-GGUF and below it, a specific filename to download, such as: alma-13b.Q4_K_M.gguf. Then click Download. 
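Downloads can also be scripted from Python rather than the CLI shown below; a small sketch using `huggingface_hub` (not from the original card), with the filename taken from the table above:

```python
from huggingface_hub import hf_hub_download

# Fetch a single GGUF file from the repo into the current directory.
path = hf_hub_download(
    repo_id="TheBloke/ALMA-13B-GGUF",
    filename="alma-13b.Q4_K_M.gguf",
    local_dir=".",
)
print(f"Downloaded to: {path}")
```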
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/ALMA-13B-GGUF alma-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/ALMA-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/ALMA-13B-GGUF alma-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m alma-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Translate this from Chinese to English:\nChinese: {prompt}\nEnglish:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. 
# Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/ALMA-13B-GGUF", model_file="alma-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant.
<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Haoran Xu's ALMA 13B

**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance. Please find more details in our [paper](https://arxiv.org/abs/2309.11674).

```
@misc{xu2023paradigm,
      title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
      author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
      year={2023},
      eprint={2309.11674},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

We release four translation models presented in the paper:

- **ALMA-7B**: Full-weight fine-tune LLaMA-2-7B on 20B monolingual tokens and then **full-weight** fine-tune on human-written parallel data
- **ALMA-7B-LoRA**: Full-weight fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
- **ALMA-13B**: Full-weight fine-tune LLaMA-2-13B on 12B monolingual tokens and then **full-weight** fine-tune on human-written parallel data
- **ALMA-13B-LoRA** (Our best system): Full-weight fine-tune LLaMA-2-13B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data

Model checkpoints are released at huggingface:

| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |

**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They only experience stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models for translation purposes.**

A quick start to using our best system (ALMA-13B-LoRA) for translation:
An example of translating "我爱机器翻译。" into English:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer

# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')

# Add the source sentence into the prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```

Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA).

<!-- original-model-card end -->
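As a companion to the ctransformers example earlier in this README, here is a minimal `llama-cpp-python` sketch running the same translation prompt against the quantised GGUF file. This is an illustrative sketch, not part of the original instructions: the file name comes from the examples above, and `n_gpu_layers` should be tuned to your hardware.

```python
from llama_cpp import Llama

# Load the quantised GGUF file downloaded earlier; set n_gpu_layers=0 for CPU-only inference
llm = Llama(model_path="alma-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=50)

prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
output = llm(prompt, max_tokens=64, temperature=0.7, stop=["\n"])
print(output["choices"][0]["text"])
```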
leourbina/ppo-LunarLander-V2
leourbina
2023-10-14T17:38:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T17:06:41Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -371.08 +/- 90.69 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
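Until the usage stub above is filled in, here is a minimal sketch of how SB3 checkpoints published on the Hub are typically loaded and rolled out. It assumes SB3 v2 with the Gymnasium API, and the zip filename inside the repo is an assumption — check the repository's file list:

```python
import gymnasium as gym  # LunarLander-v2 also requires the box2d extra
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename is assumed; verify in the repo)
checkpoint = load_from_hub(
    repo_id="leourbina/ppo-LunarLander-V2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out a single episode with the trained policy
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```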
aiknight87/llama-2-7b-hf-new1
aiknight87
2023-10-14T17:35:09Z
0
0
peft
[ "peft", "llama", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "4-bit", "bitsandbytes", "region:us" ]
null
2023-10-14T15:41:31Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
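The `bitsandbytes` settings listed above map directly onto a `BitsAndBytesConfig`. As an illustrative sketch (not the author's code), the adapter could be loaded on top of its base model like this — access to the gated `meta-llama/Llama-2-7b-hf` weights is assumed:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Reconstruct the 4-bit quantization config listed in this card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
# Attach the PEFT adapter from this repo
model = PeftModel.from_pretrained(base, "aiknight87/llama-2-7b-hf-new1")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```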
dhanushreddy29/BGRemovalTest
dhanushreddy29
2023-10-14T17:28:40Z
0
0
null
[ "license:mit", "region:us" ]
null
2023-10-14T17:12:53Z
--- title: Remove_Background app_file: main.py sdk: gradio sdk_version: 3.14.0 license: mit emoji: ⚡ colorFrom: indigo colorTo: blue ---
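The front matter above describes a Gradio Space whose entry point is `main.py`. The actual application code is not included in this card; a hypothetical minimal skeleton consistent with that config might look like this (the background-removal logic itself is a placeholder, not the Space's real implementation):

```python
import gradio as gr

def remove_background(image):
    # Placeholder: the Space's actual background-removal logic is not documented here
    return image

demo = gr.Interface(
    fn=remove_background,
    inputs=gr.Image(type="pil"),
    outputs=gr.Image(type="pil"),
    title="Remove Background",
)

if __name__ == "__main__":
    demo.launch()
```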
parasora/output
parasora
2023-10-14T17:27:54Z
0
0
null
[ "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:other", "region:us" ]
null
2023-10-10T10:22:28Z
--- license: other base_model: microsoft/phi-1_5 tags: - generated_from_trainer model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
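For reference, the hyperparameters listed above correspond roughly to the following `TrainingArguments`; this is a reconstruction for illustration, not the author's actual training script (dataset and model wiring are omitted):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed in this card
args = TrainingArguments(
    output_dir="output",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```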
TheBloke/SauerkrautLM-13B-v1-GPTQ
TheBloke
2023-10-14T17:23:09Z
23
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "de", "en", "base_model:VAGOsolutions/SauerkrautLM-13b-v1", "base_model:quantized:VAGOsolutions/SauerkrautLM-13b-v1", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-14T16:32:32Z
--- base_model: VAGOsolutions/SauerkrautLM-13b-v1 inference: false language: - de - en library_name: transformers license: llama2 model_creator: VAGO solutions model_name: SauerkrautLM 13B v1 model_type: llama pipeline_tag: text-generation prompt_template: "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent\ \ gibt hilfreiche, detaillierte und h\xF6fliche Antworten. \nUser: {prompt} \nAssistant:\n" quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # SauerkrautLM 13B v1 - GPTQ - Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions) - Original model: [SauerkrautLM 13B v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1) <!-- description start --> ## Description This repo contains GPTQ model files for [VAGO solutions's SauerkrautLM 13B v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF) * [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Sauerkraut ``` Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. User: {prompt} Assistant: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. 
Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. 
| <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/SauerkrautLM-13B-v1-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/SauerkrautLM-13B-v1-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `SauerkrautLM-13B-v1-GPTQ`: ```shell mkdir SauerkrautLM-13B-v1-GPTQ huggingface-cli download TheBloke/SauerkrautLM-13B-v1-GPTQ --local-dir SauerkrautLM-13B-v1-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir SauerkrautLM-13B-v1-GPTQ huggingface-cli download TheBloke/SauerkrautLM-13B-v1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir SauerkrautLM-13B-v1-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir SauerkrautLM-13B-v1-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-13B-v1-GPTQ --local-dir SauerkrautLM-13B-v1-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). 
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/SauerkrautLM-13B-v1-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/SauerkrautLM-13B-v1-GPTQ:gptq-4bit-32g-actorder_True`
    - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-13B-v1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/SauerkrautLM-13B-v1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template=f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```

### You can then use the following code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/SauerkrautLM-13B-v1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template=f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: VAGO solutions's SauerkrautLM 13B v1

![SauerkrautLM](images/SauerkrautLM.png "SauerkrautLM")

## VAGO solutions SauerkrautLM

Introducing SauerkrautLM-v1 - Your German Language Powerhouse!

We are thrilled to unveil our **very first release**, **SauerkrautLM-v1**. This remarkable creation marks a significant milestone as it is specifically **tailored for the German-speaking community**. In a landscape where German language models are scarce, we are proud to offer a solution that fills this void.

What sets SauerkrautLM-v1 apart is its versatility. Whether you are an individual looking to harness its capabilities for personal use or a business seeking to integrate it into your projects, our model is designed to accommodate all. It operates under the LLAMA 2 License, providing you with the freedom to explore its potential in both private and commercial applications.

Performance is at the heart of SauerkrautLM-v1. We put it to the **test using a customized version of MT-Bench for the German language**, and the results speak volumes. It currently stands as the most robust German language model on Hugging Face (based on German MT-Bench results), showcasing its exceptional capabilities. Rest assured, this model is here to shine and set new standards. And best of all, it comes in three different sizes (3B, 7B, 13B) to address your individual needs.

Our model's journey began with meticulous training using an **augmented dataset within the QLoRA approach**. This is just the beginning of our model series, promising even more innovative and powerful solutions in the future.
Join us on this exciting adventure as we redefine the possibilities of language modeling for the German-speaking world. SauerkrautLM-v1 is here to empower your language-related endeavors like never before.

## All Models

| Model | HF | GPTQ | GGUF |
|-------|-------|-------|-------|
| SauerkrautLM-3b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) | soon | soon |
| SauerkrautLM-7b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1) | soon | soon |
| SauerkrautLM-7b-v1-mistral | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) | soon | soon |
| SauerkrautLM-13b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1) | soon | soon |

## Model Details

**SauerkrautLM-13b-v1**

**Training Dataset:**

SauerkrautLM was trained with a mix of German data augmentation and translated data. We found that a simple translation of training data can lead to unnatural German phrasings. Data augmentation techniques were therefore used to ensure grammatical and syntactical correctness and more natural German wording in our training data.

**Training Procedure:**

SauerkrautLM-13b-v1 was fine-tuned using QLoRA on 1 A100 80GB with Axolotl.

- **Trained by:** SauerkrautLM-v1 trained by VAGO solutions
- **Model Type:** SauerkrautLM-v1 is an auto-regressive language model based on the transformer architecture
- **Language(s):** German, English
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected])

**Prompt Template:**

```
Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
```

## Evaluation

**[MT-Bench-TrueGerman](https://huggingface.co/datasets/VAGOsolutions/MT-Bench-TrueGerman)**

![First Turn](images/FirstTurn.PNG "First Turn")
![Second Turn](images/SecondTurn.PNG "Second Turn")
![Average](images/Average.PNG "Average")
![Category Scores](images/SauerkrautLM-13b.png "Category Scores")
![Category Plot](images/SauerkrautLM-13b-v1.png "Category Plot")

## Disclaimer

Our models have been meticulously trained on extensive datasets. While we have made diligent efforts to thoroughly screen and eliminate any instances of coarse or inappropriate language from our data, we must inform users that despite our best efforts in data cleansing, the possibility of some such content slipping through cannot be entirely ruled out. Furthermore, it is important to note that we have implemented filters within our models; however, we cannot always guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the original Llama2 license remains applicable and is included with the model files.

## Contact

If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.
## Collaborations We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
Dawoodthouseef/llama2-gguf
Dawoodthouseef
2023-10-14T17:19:06Z
34
1
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "en", "arxiv:2307.09288", "region:us" ]
text-generation
2023-09-22T16:56:01Z
--- extra_gated_heading: Access Llama 2 on Hugging Face extra_gated_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**" extra_gated_button_content: Submit extra_gated_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: - en pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 --- # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. 
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)

To run this GGUF build with `llama-cpp-python`, first install the library:

```shell
pip install llama-cpp-python==0.2.6
```

Then you can use the model like this:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/llama-model.gguf")
output = llm("Q: Name the planets in the solar system? A: ", max_tokens=32, stop=["Q:", "\n"], echo=True)
print(output)
```

### Example output of the above code

```
{
  "id": "cmpl-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "object": "text_completion",
  "created": 1679561337,
  "model": "./models/7B/llama-model.gguf",
  "choices": [
    {
      "text": "Q: Name the planets in the solar system? A: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.",
      "index": 0,
      "logprobs": None,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 28,
    "total_tokens": 42
  }
}
```

## Intended Use

**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). A sketch of this template appears at the end of this card.

**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.

||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|

**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.

|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|

**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *Math:* We report the average of the GSM8K (8-shot) and MATH (4-shot) benchmarks at top 1.

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|

**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|

**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.

## Ethical Considerations and Limitations

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
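As noted in the Intended Use section above, the chat-tuned variants expect a specific prompt format. Below is a minimal illustrative reconstruction of a single-turn prompt; the system prompt text is a placeholder, not Meta's reference prompt, and the authoritative implementation is the `chat_completion` code linked earlier:

```python
# Illustrative single-turn Llama-2-chat prompt; see Meta's chat_completion
# reference code for the exact multi-turn handling of BOS/EOS tokens.
system = "You are a helpful, respectful and honest assistant."
user_msg = "Name the planets in the solar system."

prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_msg} [/INST]"
print(prompt)
```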
Reangsy/pegasus-xsum-transcript
Reangsy
2023-10-14T17:01:41Z
5
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-xsum", "base_model:finetune:google/pegasus-xsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-14T16:56:46Z
---
base_model: google/pegasus-xsum
tags:
- generated_from_trainer
model-index:
- name: pegasus-samsum
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pegasus-samsum

This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
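A minimal inference sketch, assuming the checkpoint in this repo loads as a standard seq2seq summarisation model (the transcript text is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned PEGASUS checkpoint from this repo
summarizer = pipeline("summarization", model="Reangsy/pegasus-xsum-transcript")

transcript = "Speaker A: Hi, shall we go over the quarterly numbers? Speaker B: Sure, let's start with revenue."
print(summarizer(transcript, max_length=64, min_length=8)[0]["summary_text"])
```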
NourhanAbosaeed/dummy-model
NourhanAbosaeed
2023-10-14T16:53:10Z
4
0
transformers
[ "transformers", "tf", "distilbert", "feature-extraction", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "base_model:finetune:distilbert/distilbert-base-uncased-finetuned-sst-2-english", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2023-10-14T16:51:20Z
--- license: apache-2.0 base_model: distilbert-base-uncased-finetuned-sst-2-english tags: - generated_from_keras_callback model-index: - name: dummy-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dummy-model This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.34.0 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.14.1
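Since this repo ships TensorFlow weights tagged for feature extraction, here is a minimal usage sketch (assuming the checkpoint loads like any DistilBERT encoder):

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("NourhanAbosaeed/dummy-model")
model = TFAutoModel.from_pretrained("NourhanAbosaeed/dummy-model")

# Encode a sentence and inspect the token embeddings
inputs = tokenizer("Hello world!", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```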
joseval2001/mi-super-modelo
joseval2001
2023-10-14T16:51:24Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:yelp_review_full", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-14T16:35:41Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - yelp_review_full metrics: - accuracy model-index: - name: mi-super-modelo results: - task: name: Text Classification type: text-classification dataset: name: yelp_review_full type: yelp_review_full config: yelp_review_full split: test args: yelp_review_full metrics: - name: Accuracy type: accuracy value: 0.3 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mi-super-modelo This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset. It achieves the following results on the evaluation set: - Loss: 1.5515 - Accuracy: 0.3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6284 | 0.5 | 5 | 1.5730 | 0.275 | | 1.6939 | 1.0 | 10 | 1.5515 | 0.3 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
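A minimal inference sketch; note that the reported accuracy is only 0.3 after a single short epoch, so predictions are illustrative rather than reliable:

```python
from transformers import pipeline

# Load the fine-tuned Yelp review classifier from this repo
classifier = pipeline("text-classification", model="joseval2001/mi-super-modelo")
print(classifier("The food was amazing and the staff were friendly."))
```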
TheBloke/SauerkrautLM-13B-v1-GGUF
TheBloke
2023-10-14T16:47:34Z
98
3
transformers
[ "transformers", "gguf", "llama", "text-generation", "de", "en", "base_model:VAGOsolutions/SauerkrautLM-13b-v1", "base_model:quantized:VAGOsolutions/SauerkrautLM-13b-v1", "license:llama2", "region:us" ]
text-generation
2023-10-14T16:32:22Z
---
base_model: VAGOsolutions/SauerkrautLM-13b-v1
inference: false
language:
- de
- en
library_name: transformers
license: llama2
model_creator: VAGO solutions
model_name: SauerkrautLM 13B v1
model_type: llama
pipeline_tag: text-generation
prompt_template: "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent\
  \ gibt hilfreiche, detaillierte und h\xF6fliche Antworten. \nUser: {prompt} \nAssistant:\n"
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# SauerkrautLM 13B v1 - GGUF
- Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions)
- Original model: [SauerkrautLM 13B v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1)

<!-- description start -->
## Description

This repo contains GGUF format model files for [VAGO solutions's SauerkrautLM 13B v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF)
* [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Sauerkraut

```
Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [sauerkrautlm-13b-v1.Q2_K.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [sauerkrautlm-13b-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [sauerkrautlm-13b-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [sauerkrautlm-13b-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [sauerkrautlm-13b-v1.Q4_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [sauerkrautlm-13b-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [sauerkrautlm-13b-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [sauerkrautlm-13b-v1.Q5_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [sauerkrautlm-13b-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [sauerkrautlm-13b-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [sauerkrautlm-13b-v1.Q6_K.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [sauerkrautlm-13b-v1.Q8_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-13B-v1-GGUF/blob/main/sauerkrautlm-13b-v1.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/SauerkrautLM-13B-v1-GGUF and below it, a specific filename to download, such as: sauerkrautlm-13b-v1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/SauerkrautLM-13B-v1-GGUF sauerkrautlm-13b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/SauerkrautLM-13B-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-13B-v1-GGUF sauerkrautlm-13b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m sauerkrautlm-13b-v1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. \nUser: {prompt} \nAssistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
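### How to load this model in Python code, using llama-cpp-python

Only ctransformers is walked through below, so here is a minimal llama-cpp-python sketch as well (first `pip install llama-cpp-python`; see that project's README for GPU-accelerated builds). The model path assumes you downloaded `sauerkrautlm-13b-v1.Q4_K_M.gguf` to the current directory as shown above, and the parameters mirror the llama.cpp command.

```python
from llama_cpp import Llama

# Model path is an assumption: the Q4_K_M file downloaded to the current directory.
# Set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="./sauerkrautlm-13b-v1.Q4_K_M.gguf",
    n_ctx=4096,        # the model's sequence length
    n_gpu_layers=32,   # number of layers to offload to GPU
)

prompt = (
    "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. "
    "Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. \n"
    "User: Was ist GGUF? \n"
    "Assistant:"
)
output = llm(prompt, max_tokens=128, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```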
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/SauerkrautLM-13B-v1-GGUF", model_file="sauerkrautlm-13b-v1.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. 
Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: VAGO solutions's SauerkrautLM 13B v1 ![SauerkrautLM](images/SauerkrautLM.png "SauerkrautLM") ## VAGO solutions SauerkrautLM Introducing SauerkrautLM-v1 - Your German Language Powerhouse! We are thrilled to unveil our **very first release**, **SauerkrautLM-v1**. This remarkable creation marks a significant milestone as it is specifically **tailored for the German-speaking community**. In a landscape where German language models are scarce, we are proud to offer a solution that fills this void. What sets SauerkrautLM-v1 apart is its versatility. Whether you are an individual looking to harness its capabilities for personal use or a business seeking to integrate it into your projects, our model is designed to accommodate all. It operates under the LLAMA 2 License, providing you with the freedom to explore its potential in both private and commercial applications. Performance is at the heart of SauerkrautLM-v1. We put it to the **test using a customized version of MT-Bench for the German language**, and the results speak volumes. It currently stands as the most robust German language model on Hugging Face (based on German MT-Bench results), showcasing its exceptional capabilities. Rest assured, this model is here to shine and set new standards. And the best thing is that it comes in three different sizes (3B, 7B, 13B) to address your individual needs. Our model's journey began with meticulous training using an **augmented dataset within the QLoRA approach**. This is just the beginning of our model series, promising even more innovative and powerful solutions in the future. Join us on this exciting adventure as we redefine the possibilities of language modeling for the German-speaking world. SauerkrautLM-v1 is here to empower your language-related endeavors like never before. ## All Models | Model | HF | GPTQ | GGUF | |-------|-------|-------|-------| | SauerkrautLM-3b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) | soon | soon | | SauerkrautLM-7b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1) | soon | soon | | SauerkrautLM-7b-v1-mistral | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) | soon | soon | | SauerkrautLM-13b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1) | soon | soon | ## Model Details **SauerkrautLM-13b-v1** **Training Dataset:** SauerkrautLM was trained with a mix of German data augmentation and translated data. We found that a simple translation of training data alone can lead to unnatural German phrasings. Data augmentation techniques were therefore used to ensure grammatical and syntactical correctness and more natural German wording in our training data. **Training Procedure:** SauerkrautLM-13b-v1 was fine-tuned using QLoRA on 1 A100 80GB with Axolotl. 
- **Trained by:** SauerkrautLM-v1 trained by VAGO solutions - **Model Type:** SauerkrautLM-v1 is an auto-regressive language model based on the transformer architecture - **Language(s):** German, English - **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) - **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected]) **Prompt Template:** ``` Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten. User: {prompt} Assistant: ``` ## Evaluation **[MT-Bench-TrueGerman](https://huggingface.co/datasets/VAGOsolutions/MT-Bench-TrueGerman)** ![First Turn](images/FirstTurn.PNG "First Turn") ![Second Turn](images/SecondTurn.PNG "Second Turn") ![Average](images/Average.PNG "Average") ![Category Scores](images/SauerkrautLM-13b.png "Category Scores") ![Category Plot](images/SauerkrautLM-13b-v1.png "Category Plot") ## Disclaimer Our models have been meticulously trained on extensive datasets. While we have made diligent efforts to thoroughly screen and eliminate any instances of coarse or inappropriate language from our data, we must inform users that despite our best efforts in data cleansing, the possibility of some such content slipping through cannot be entirely ruled out. Furthermore, it is important to note that we have implemented filters within our models; however, we cannot always guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the original Llama2 license remains applicable and is included with the model files. ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions. ## Collaborations We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us. <!-- original-model-card end -->
TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ
TheBloke
2023-10-14T16:28:07Z
34
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "base_model:abdgrt/Tinyllama-2-1b-miniguanaco", "base_model:quantized:abdgrt/Tinyllama-2-1b-miniguanaco", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-11T03:20:35Z
--- base_model: abdgrt/Tinyllama-2-1b-miniguanaco inference: false license: other model_creator: Odunusi Abraham Ayoola model_name: Tinyllama 2 1B MiniGuanaco model_type: llama prompt_template: '### Human: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Tinyllama 2 1B MiniGuanaco - GPTQ - Model creator: [Odunusi Abraham Ayoola](https://huggingface.co/abdgrt) - Original model: [Tinyllama 2 1B MiniGuanaco](https://huggingface.co/abdgrt/Tinyllama-2-1b-miniguanaco) <!-- description start --> ## Description This repo contains GPTQ model files for [Odunusi Abraham Ayoola's Tinyllama 2 1B MiniGuanaco](https://huggingface.co/abdgrt/Tinyllama-2-1b-miniguanaco). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF) * [Odunusi Abraham Ayoola's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/abdgrt/Tinyllama-2-1b-miniguanaco) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Guanaco ``` ### Human: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. 
Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 0.77 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 0.82 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.23 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.26 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.32 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 0.79 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. 
| <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Tinyllama-2-1b-miniguanaco-GPTQ`: ```shell mkdir Tinyllama-2-1b-miniguanaco-GPTQ huggingface-cli download TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ --local-dir Tinyllama-2-1b-miniguanaco-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Tinyllama-2-1b-miniguanaco-GPTQ huggingface-cli download TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Tinyllama-2-1b-miniguanaco-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Tinyllama-2-1b-miniguanaco-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ --local-dir Tinyllama-2-1b-miniguanaco-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) 
<!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Tinyllama-2-1b-miniguanaco-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''### Human: {prompt} ### Assistant: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . 
``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''### Human: {prompt} ### Assistant: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Odunusi Abraham Ayoola's Tinyllama 2 1B MiniGuanaco No original model card was available.
TheBloke/Tinyllama-2-1b-miniguanaco-GGUF
TheBloke
2023-10-14T16:21:02Z
1,461
16
transformers
[ "transformers", "gguf", "llama", "base_model:abdgrt/Tinyllama-2-1b-miniguanaco", "base_model:quantized:abdgrt/Tinyllama-2-1b-miniguanaco", "license:other", "region:us" ]
null
2023-10-11T03:20:35Z
--- base_model: abdgrt/Tinyllama-2-1b-miniguanaco inference: false license: other model_creator: Odunusi Abraham Ayoola model_name: Tinyllama 2 1B MiniGuanaco model_type: llama prompt_template: '### Human: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Tinyllama 2 1B MiniGuanaco - GGUF - Model creator: [Odunusi Abraham Ayoola](https://huggingface.co/abdgrt) - Original model: [Tinyllama 2 1B MiniGuanaco](https://huggingface.co/abdgrt/Tinyllama-2-1b-miniguanaco) <!-- description start --> ## Description This repo contains GGUF format model files for [Odunusi Abraham Ayoola's Tinyllama 2 1B MiniGuanaco](https://huggingface.co/abdgrt/Tinyllama-2-1b-miniguanaco). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF) * [Odunusi Abraham Ayoola's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/abdgrt/Tinyllama-2-1b-miniguanaco) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Guanaco ``` ### Human: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [tinyllama-2-1b-miniguanaco.Q2_K.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q2_K.gguf) | Q2_K | 2 | 0.48 GB| 2.98 GB | smallest, significant quality loss - not recommended for most purposes | | [tinyllama-2-1b-miniguanaco.Q3_K_S.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q3_K_S.gguf) | Q3_K_S | 3 | 0.50 GB| 3.00 GB | very small, high quality loss | | [tinyllama-2-1b-miniguanaco.Q3_K_M.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q3_K_M.gguf) | Q3_K_M | 3 | 0.55 GB| 3.05 GB | very small, high quality loss | | [tinyllama-2-1b-miniguanaco.Q3_K_L.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q3_K_L.gguf) | Q3_K_L | 3 | 0.59 GB| 3.09 GB | small, substantial quality loss | | [tinyllama-2-1b-miniguanaco.Q4_0.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q4_0.gguf) | Q4_0 | 4 | 0.64 GB| 3.14 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [tinyllama-2-1b-miniguanaco.Q4_K_S.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q4_K_S.gguf) | Q4_K_S | 4 | 0.64 GB| 3.14 GB | small, greater quality loss | | [tinyllama-2-1b-miniguanaco.Q4_K_M.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q4_K_M.gguf) | Q4_K_M | 4 | 0.67 GB| 3.17 GB | medium, balanced quality - recommended | | [tinyllama-2-1b-miniguanaco.Q5_0.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q5_0.gguf) | Q5_0 | 5 | 0.77 GB| 3.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [tinyllama-2-1b-miniguanaco.Q5_K_S.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q5_K_S.gguf) | Q5_K_S | 5 | 0.77 GB| 3.27 GB | large, low quality loss - recommended | | [tinyllama-2-1b-miniguanaco.Q5_K_M.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q5_K_M.gguf) | Q5_K_M | 5 | 0.78 GB| 3.28 GB | large, very low quality loss - recommended | | [tinyllama-2-1b-miniguanaco.Q6_K.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q6_K.gguf) | Q6_K | 6 | 0.90 GB| 3.40 GB | very large, extremely low quality loss | | [tinyllama-2-1b-miniguanaco.Q8_0.gguf](https://huggingface.co/TheBloke/Tinyllama-2-1b-miniguanaco-GGUF/blob/main/tinyllama-2-1b-miniguanaco.Q8_0.gguf) | Q8_0 | 8 | 1.17 GB| 3.67 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Tinyllama-2-1b-miniguanaco-GGUF and below it, a specific filename to download, such as: tinyllama-2-1b-miniguanaco.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Tinyllama-2-1b-miniguanaco-GGUF tinyllama-2-1b-miniguanaco.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Tinyllama-2-1b-miniguanaco-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tinyllama-2-1b-miniguanaco-GGUF tinyllama-2-1b-miniguanaco.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m tinyllama-2-1b-miniguanaco.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: {prompt}\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
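### How to load this model in Python code, using llama-cpp-python

Only ctransformers is walked through below, so here is a minimal llama-cpp-python sketch as well (first `pip install llama-cpp-python`; see that project's README for GPU-accelerated builds). The model path assumes you downloaded `tinyllama-2-1b-miniguanaco.Q4_K_M.gguf` to the current directory as shown above, and the parameters mirror the llama.cpp command.

```python
from llama_cpp import Llama

# Model path is an assumption: the Q4_K_M file downloaded to the current directory.
# Set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="./tinyllama-2-1b-miniguanaco.Q4_K_M.gguf",
    n_ctx=2048,        # the model's sequence length
    n_gpu_layers=32,   # number of layers to offload to GPU
)

prompt = "### Human: Tell me about AI\n### Assistant:"
output = llm(prompt, max_tokens=128, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```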
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Tinyllama-2-1b-miniguanaco-GGUF", model_file="tinyllama-2-1b-miniguanaco.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. 
Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Odunusi Abraham Ayoola's Tinyllama 2 1B MiniGuanaco No original model card was available. <!-- original-model-card end -->
gkalsrudals/use_data_finetuning
gkalsrudals
2023-10-14T16:19:50Z
28
0
transformers
[ "transformers", "pytorch", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2023-10-14T12:00:38Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: use_data_finetuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # use_data_finetuning This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
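## Inference example

Since the usage sections above are empty, here is a minimal inference sketch. It assumes the standard `transformers` DETR object-detection API applies unchanged to this fine-tune (the label set is unknown, so the printed class names come from whatever `id2label` mapping the checkpoint carries); the sample image URL is just for illustration.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

repo = "gkalsrudals/use_data_finetuning"
processor = AutoImageProcessor.from_pretrained(repo)
model = DetrForObjectDetection.from_pretrained(repo)

# Any RGB image works; this COCO sample is only a placeholder.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a confidence threshold, rescaled to the original image size
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```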
mauricioplopes/facemask-model
mauricioplopes
2023-10-14T16:16:40Z
0
0
fastai
[ "fastai", "am", "license:apache-2.0", "region:us" ]
null
2023-10-14T07:07:16Z
--- license: apache-2.0 language: - am library_name: fastai ---
brad559/ppo-LunarLander-v2
brad559
2023-10-14T16:14:37Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T16:14:19Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 256.48 +/- 20.25 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the filename below is assumed, not confirmed by the card.
checkpoint = load_from_hub(
    repo_id="brad559/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
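Continuing from the loading sketch above, an evaluation along the lines of the reported mean_reward can be reproduced like this (a sketch, assuming an SB3 2.x / Gymnasium setup; `model` comes from the previous block):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```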
jncraton/gte-tiny-ct2-int8
jncraton
2023-10-14T15:44:06Z
7
0
sentence-transformers
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "mteb", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-10-14T15:32:30Z
--- model-index: - name: gte_tiny results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.76119402985076 - type: ap value: 34.63659287952359 - type: f1 value: 65.88939512571113 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 86.61324999999998 - type: ap value: 81.7476302802319 - type: f1 value: 86.5863470912001 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 42.61000000000001 - type: f1 value: 42.2217180000715 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 28.377999999999997 - type: map_at_10 value: 44.565 - type: map_at_100 value: 45.48 - type: map_at_1000 value: 45.487 - type: map_at_3 value: 39.841 - type: map_at_5 value: 42.284 - type: mrr_at_1 value: 29.445 - type: mrr_at_10 value: 44.956 - type: mrr_at_100 value: 45.877 - type: mrr_at_1000 value: 45.884 - type: mrr_at_3 value: 40.209 - type: mrr_at_5 value: 42.719 - type: ndcg_at_1 value: 28.377999999999997 - type: ndcg_at_10 value: 53.638 - type: ndcg_at_100 value: 57.354000000000006 - type: ndcg_at_1000 value: 57.513000000000005 - type: ndcg_at_3 value: 43.701 - type: ndcg_at_5 value: 48.114000000000004 - type: precision_at_1 value: 28.377999999999997 - type: precision_at_10 value: 8.272 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.303 - type: precision_at_5 value: 13.129 - type: recall_at_1 value: 28.377999999999997 - type: recall_at_10 value: 82.717 - type: recall_at_100 value: 98.43499999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 54.908 - type: recall_at_5 value: 65.647 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.637318326729876 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 36.01134479855804 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 59.82917555338909 - type: mrr value: 74.7888361254012 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.1657730995964 - type: cos_sim_spearman value: 86.62787748941281 - type: euclidean_pearson value: 85.48127914481798 - type: euclidean_spearman value: 86.48148861167424 - type: manhattan_pearson value: 85.07496934780823 - type: manhattan_spearman value: 86.39473964708843 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy 
value: 81.73051948051948 - type: f1 value: 81.66368364988331 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.18623707448217 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.12697757150375 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.160000000000004 - type: map_at_10 value: 40.474 - type: map_at_100 value: 41.905 - type: map_at_1000 value: 42.041000000000004 - type: map_at_3 value: 37.147000000000006 - type: map_at_5 value: 38.873999999999995 - type: mrr_at_1 value: 36.91 - type: mrr_at_10 value: 46.495999999999995 - type: mrr_at_100 value: 47.288000000000004 - type: mrr_at_1000 value: 47.339999999999996 - type: mrr_at_3 value: 43.777 - type: mrr_at_5 value: 45.257999999999996 - type: ndcg_at_1 value: 36.91 - type: ndcg_at_10 value: 46.722 - type: ndcg_at_100 value: 51.969 - type: ndcg_at_1000 value: 54.232 - type: ndcg_at_3 value: 41.783 - type: ndcg_at_5 value: 43.797000000000004 - type: precision_at_1 value: 36.91 - type: precision_at_10 value: 9.013 - type: precision_at_100 value: 1.455 - type: precision_at_1000 value: 0.193 - type: precision_at_3 value: 20.124 - type: precision_at_5 value: 14.363000000000001 - type: recall_at_1 value: 29.160000000000004 - type: recall_at_10 value: 58.521 - type: recall_at_100 value: 80.323 - type: recall_at_1000 value: 95.13000000000001 - type: recall_at_3 value: 44.205 - type: recall_at_5 value: 49.97 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.750000000000004 - type: map_at_10 value: 36.39 - type: map_at_100 value: 37.5 - type: map_at_1000 value: 37.625 - type: map_at_3 value: 33.853 - type: map_at_5 value: 35.397 - type: mrr_at_1 value: 34.14 - type: mrr_at_10 value: 41.841 - type: mrr_at_100 value: 42.469 - type: mrr_at_1000 value: 42.521 - type: mrr_at_3 value: 39.724 - type: mrr_at_5 value: 40.955999999999996 - type: ndcg_at_1 value: 34.14 - type: ndcg_at_10 value: 41.409 - type: ndcg_at_100 value: 45.668 - type: ndcg_at_1000 value: 47.916 - type: ndcg_at_3 value: 37.836 - type: ndcg_at_5 value: 39.650999999999996 - type: precision_at_1 value: 34.14 - type: precision_at_10 value: 7.739 - type: precision_at_100 value: 1.2630000000000001 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 18.217 - type: precision_at_5 value: 12.854 - type: recall_at_1 value: 27.750000000000004 - type: recall_at_10 value: 49.882 - type: recall_at_100 value: 68.556 - type: recall_at_1000 value: 83.186 - type: recall_at_3 value: 39.047 - type: recall_at_5 value: 44.458 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 36.879 - type: map_at_10 value: 48.878 - type: map_at_100 value: 49.918 - type: map_at_1000 value: 49.978 - type: map_at_3 value: 45.867999999999995 - type: map_at_5 value: 47.637 - type: mrr_at_1 value: 42.696 - type: mrr_at_10 value: 52.342 - type: mrr_at_100 value: 53.044000000000004 - type: mrr_at_1000 value: 53.077 - type: mrr_at_3 
value: 50.01 - type: mrr_at_5 value: 51.437 - type: ndcg_at_1 value: 42.696 - type: ndcg_at_10 value: 54.469 - type: ndcg_at_100 value: 58.664 - type: ndcg_at_1000 value: 59.951 - type: ndcg_at_3 value: 49.419999999999995 - type: ndcg_at_5 value: 52.007000000000005 - type: precision_at_1 value: 42.696 - type: precision_at_10 value: 8.734 - type: precision_at_100 value: 1.1769999999999998 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.027 - type: precision_at_5 value: 15.135000000000002 - type: recall_at_1 value: 36.879 - type: recall_at_10 value: 67.669 - type: recall_at_100 value: 85.822 - type: recall_at_1000 value: 95.092 - type: recall_at_3 value: 54.157999999999994 - type: recall_at_5 value: 60.436 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.942 - type: map_at_10 value: 31.741999999999997 - type: map_at_100 value: 32.721000000000004 - type: map_at_1000 value: 32.809 - type: map_at_3 value: 29.17 - type: map_at_5 value: 30.714000000000002 - type: mrr_at_1 value: 24.746000000000002 - type: mrr_at_10 value: 33.517 - type: mrr_at_100 value: 34.451 - type: mrr_at_1000 value: 34.522000000000006 - type: mrr_at_3 value: 31.148999999999997 - type: mrr_at_5 value: 32.606 - type: ndcg_at_1 value: 24.746000000000002 - type: ndcg_at_10 value: 36.553000000000004 - type: ndcg_at_100 value: 41.53 - type: ndcg_at_1000 value: 43.811 - type: ndcg_at_3 value: 31.674000000000003 - type: ndcg_at_5 value: 34.241 - type: precision_at_1 value: 24.746000000000002 - type: precision_at_10 value: 5.684 - type: precision_at_100 value: 0.859 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 13.597000000000001 - type: precision_at_5 value: 9.672 - type: recall_at_1 value: 22.942 - type: recall_at_10 value: 49.58 - type: recall_at_100 value: 72.614 - type: recall_at_1000 value: 89.89200000000001 - type: recall_at_3 value: 36.552 - type: recall_at_5 value: 42.702 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.345 - type: map_at_10 value: 22.428 - type: map_at_100 value: 23.756 - type: map_at_1000 value: 23.872 - type: map_at_3 value: 20.212 - type: map_at_5 value: 21.291 - type: mrr_at_1 value: 19.279 - type: mrr_at_10 value: 27.1 - type: mrr_at_100 value: 28.211000000000002 - type: mrr_at_1000 value: 28.279 - type: mrr_at_3 value: 24.813 - type: mrr_at_5 value: 25.889 - type: ndcg_at_1 value: 19.279 - type: ndcg_at_10 value: 27.36 - type: ndcg_at_100 value: 33.499 - type: ndcg_at_1000 value: 36.452 - type: ndcg_at_3 value: 23.233999999999998 - type: ndcg_at_5 value: 24.806 - type: precision_at_1 value: 19.279 - type: precision_at_10 value: 5.149 - type: precision_at_100 value: 0.938 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 11.360000000000001 - type: precision_at_5 value: 8.035 - type: recall_at_1 value: 15.345 - type: recall_at_10 value: 37.974999999999994 - type: recall_at_100 value: 64.472 - type: recall_at_1000 value: 85.97200000000001 - type: recall_at_3 value: 26.203 - type: recall_at_5 value: 30.485 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.362000000000002 - type: map_at_10 value: 36.406 - type: map_at_100 value: 37.726 - type: map_at_1000 value: 37.84 - 
type: map_at_3 value: 33.425 - type: map_at_5 value: 35.043 - type: mrr_at_1 value: 32.146 - type: mrr_at_10 value: 41.674 - type: mrr_at_100 value: 42.478 - type: mrr_at_1000 value: 42.524 - type: mrr_at_3 value: 38.948 - type: mrr_at_5 value: 40.415 - type: ndcg_at_1 value: 32.146 - type: ndcg_at_10 value: 42.374 - type: ndcg_at_100 value: 47.919 - type: ndcg_at_1000 value: 50.013 - type: ndcg_at_3 value: 37.29 - type: ndcg_at_5 value: 39.531 - type: precision_at_1 value: 32.146 - type: precision_at_10 value: 7.767 - type: precision_at_100 value: 1.236 - type: precision_at_1000 value: 0.16 - type: precision_at_3 value: 17.965999999999998 - type: precision_at_5 value: 12.742999999999999 - type: recall_at_1 value: 26.362000000000002 - type: recall_at_10 value: 54.98800000000001 - type: recall_at_100 value: 78.50200000000001 - type: recall_at_1000 value: 92.146 - type: recall_at_3 value: 40.486 - type: recall_at_5 value: 46.236 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.417 - type: map_at_10 value: 33.161 - type: map_at_100 value: 34.357 - type: map_at_1000 value: 34.473 - type: map_at_3 value: 30.245 - type: map_at_5 value: 31.541999999999998 - type: mrr_at_1 value: 29.909000000000002 - type: mrr_at_10 value: 38.211 - type: mrr_at_100 value: 39.056999999999995 - type: mrr_at_1000 value: 39.114 - type: mrr_at_3 value: 35.769 - type: mrr_at_5 value: 36.922 - type: ndcg_at_1 value: 29.909000000000002 - type: ndcg_at_10 value: 38.694 - type: ndcg_at_100 value: 44.057 - type: ndcg_at_1000 value: 46.6 - type: ndcg_at_3 value: 33.822 - type: ndcg_at_5 value: 35.454 - type: precision_at_1 value: 29.909000000000002 - type: precision_at_10 value: 7.180000000000001 - type: precision_at_100 value: 1.153 - type: precision_at_1000 value: 0.155 - type: precision_at_3 value: 16.134 - type: precision_at_5 value: 11.256 - type: recall_at_1 value: 24.417 - type: recall_at_10 value: 50.260000000000005 - type: recall_at_100 value: 73.55699999999999 - type: recall_at_1000 value: 91.216 - type: recall_at_3 value: 35.971 - type: recall_at_5 value: 40.793 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.266916666666663 - type: map_at_10 value: 32.75025 - type: map_at_100 value: 33.91341666666667 - type: map_at_1000 value: 34.031749999999995 - type: map_at_3 value: 30.166416666666674 - type: map_at_5 value: 31.577000000000005 - type: mrr_at_1 value: 28.828166666666664 - type: mrr_at_10 value: 36.80991666666667 - type: mrr_at_100 value: 37.67075 - type: mrr_at_1000 value: 37.733 - type: mrr_at_3 value: 34.513416666666664 - type: mrr_at_5 value: 35.788 - type: ndcg_at_1 value: 28.828166666666664 - type: ndcg_at_10 value: 37.796 - type: ndcg_at_100 value: 42.94783333333333 - type: ndcg_at_1000 value: 45.38908333333333 - type: ndcg_at_3 value: 33.374750000000006 - type: ndcg_at_5 value: 35.379666666666665 - type: precision_at_1 value: 28.828166666666664 - type: precision_at_10 value: 6.615749999999999 - type: precision_at_100 value: 1.0848333333333333 - type: precision_at_1000 value: 0.1484166666666667 - type: precision_at_3 value: 15.347833333333332 - type: precision_at_5 value: 10.848916666666666 - type: recall_at_1 value: 24.266916666666663 - type: recall_at_10 value: 48.73458333333333 - type: recall_at_100 value: 71.56341666666667 - type: recall_at_1000 value: 
88.63091666666668 - type: recall_at_3 value: 36.31208333333333 - type: recall_at_5 value: 41.55633333333333 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.497 - type: map_at_10 value: 30.249 - type: map_at_100 value: 30.947000000000003 - type: map_at_1000 value: 31.049 - type: map_at_3 value: 28.188000000000002 - type: map_at_5 value: 29.332 - type: mrr_at_1 value: 26.687 - type: mrr_at_10 value: 33.182 - type: mrr_at_100 value: 33.794999999999995 - type: mrr_at_1000 value: 33.873 - type: mrr_at_3 value: 31.263 - type: mrr_at_5 value: 32.428000000000004 - type: ndcg_at_1 value: 26.687 - type: ndcg_at_10 value: 34.252 - type: ndcg_at_100 value: 38.083 - type: ndcg_at_1000 value: 40.682 - type: ndcg_at_3 value: 30.464999999999996 - type: ndcg_at_5 value: 32.282 - type: precision_at_1 value: 26.687 - type: precision_at_10 value: 5.2909999999999995 - type: precision_at_100 value: 0.788 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 13.037 - type: precision_at_5 value: 9.049 - type: recall_at_1 value: 23.497 - type: recall_at_10 value: 43.813 - type: recall_at_100 value: 61.88399999999999 - type: recall_at_1000 value: 80.926 - type: recall_at_3 value: 33.332 - type: recall_at_5 value: 37.862 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.073 - type: map_at_10 value: 22.705000000000002 - type: map_at_100 value: 23.703 - type: map_at_1000 value: 23.833 - type: map_at_3 value: 20.593 - type: map_at_5 value: 21.7 - type: mrr_at_1 value: 19.683 - type: mrr_at_10 value: 26.39 - type: mrr_at_100 value: 27.264 - type: mrr_at_1000 value: 27.349 - type: mrr_at_3 value: 24.409 - type: mrr_at_5 value: 25.474000000000004 - type: ndcg_at_1 value: 19.683 - type: ndcg_at_10 value: 27.014 - type: ndcg_at_100 value: 31.948 - type: ndcg_at_1000 value: 35.125 - type: ndcg_at_3 value: 23.225 - type: ndcg_at_5 value: 24.866 - type: precision_at_1 value: 19.683 - type: precision_at_10 value: 4.948 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 10.943 - type: precision_at_5 value: 7.86 - type: recall_at_1 value: 16.073 - type: recall_at_10 value: 36.283 - type: recall_at_100 value: 58.745999999999995 - type: recall_at_1000 value: 81.711 - type: recall_at_3 value: 25.637 - type: recall_at_5 value: 29.919 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.776 - type: map_at_10 value: 33.317 - type: map_at_100 value: 34.437 - type: map_at_1000 value: 34.54 - type: map_at_3 value: 30.706 - type: map_at_5 value: 32.202999999999996 - type: mrr_at_1 value: 30.224 - type: mrr_at_10 value: 37.34 - type: mrr_at_100 value: 38.268 - type: mrr_at_1000 value: 38.335 - type: mrr_at_3 value: 35.075 - type: mrr_at_5 value: 36.348 - type: ndcg_at_1 value: 30.224 - type: ndcg_at_10 value: 38.083 - type: ndcg_at_100 value: 43.413000000000004 - type: ndcg_at_1000 value: 45.856 - type: ndcg_at_3 value: 33.437 - type: ndcg_at_5 value: 35.661 - type: precision_at_1 value: 30.224 - type: precision_at_10 value: 6.1850000000000005 - type: precision_at_100 value: 1.0030000000000001 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 14.646 - type: precision_at_5 value: 10.428999999999998 
- type: recall_at_1 value: 25.776 - type: recall_at_10 value: 48.787000000000006 - type: recall_at_100 value: 72.04899999999999 - type: recall_at_1000 value: 89.339 - type: recall_at_3 value: 36.192 - type: recall_at_5 value: 41.665 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.156 - type: map_at_10 value: 30.886000000000003 - type: map_at_100 value: 32.551 - type: map_at_1000 value: 32.769 - type: map_at_3 value: 28.584 - type: map_at_5 value: 29.959999999999997 - type: mrr_at_1 value: 28.260999999999996 - type: mrr_at_10 value: 35.555 - type: mrr_at_100 value: 36.687 - type: mrr_at_1000 value: 36.742999999999995 - type: mrr_at_3 value: 33.531 - type: mrr_at_5 value: 34.717 - type: ndcg_at_1 value: 28.260999999999996 - type: ndcg_at_10 value: 36.036 - type: ndcg_at_100 value: 42.675000000000004 - type: ndcg_at_1000 value: 45.303 - type: ndcg_at_3 value: 32.449 - type: ndcg_at_5 value: 34.293 - type: precision_at_1 value: 28.260999999999996 - type: precision_at_10 value: 6.837999999999999 - type: precision_at_100 value: 1.4569999999999999 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_3 value: 15.217 - type: precision_at_5 value: 11.028 - type: recall_at_1 value: 23.156 - type: recall_at_10 value: 45.251999999999995 - type: recall_at_100 value: 75.339 - type: recall_at_1000 value: 91.56 - type: recall_at_3 value: 34.701 - type: recall_at_5 value: 39.922999999999995 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.846 - type: map_at_10 value: 26.367 - type: map_at_100 value: 27.439999999999998 - type: map_at_1000 value: 27.552 - type: map_at_3 value: 24.006 - type: map_at_5 value: 25.230999999999998 - type: mrr_at_1 value: 21.257 - type: mrr_at_10 value: 28.071 - type: mrr_at_100 value: 29.037000000000003 - type: mrr_at_1000 value: 29.119 - type: mrr_at_3 value: 25.692999999999998 - type: mrr_at_5 value: 27.006000000000004 - type: ndcg_at_1 value: 21.257 - type: ndcg_at_10 value: 30.586000000000002 - type: ndcg_at_100 value: 35.949 - type: ndcg_at_1000 value: 38.728 - type: ndcg_at_3 value: 25.862000000000002 - type: ndcg_at_5 value: 27.967 - type: precision_at_1 value: 21.257 - type: precision_at_10 value: 4.861 - type: precision_at_100 value: 0.8130000000000001 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 10.906 - type: precision_at_5 value: 7.763000000000001 - type: recall_at_1 value: 19.846 - type: recall_at_10 value: 41.805 - type: recall_at_100 value: 66.89699999999999 - type: recall_at_1000 value: 87.401 - type: recall_at_3 value: 29.261 - type: recall_at_5 value: 34.227000000000004 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 10.333 - type: map_at_10 value: 17.14 - type: map_at_100 value: 18.878 - type: map_at_1000 value: 19.067 - type: map_at_3 value: 14.123 - type: map_at_5 value: 15.699 - type: mrr_at_1 value: 23.192 - type: mrr_at_10 value: 33.553 - type: mrr_at_100 value: 34.553 - type: mrr_at_1000 value: 34.603 - type: mrr_at_3 value: 29.848000000000003 - type: mrr_at_5 value: 32.18 - type: ndcg_at_1 value: 23.192 - type: ndcg_at_10 value: 24.707 - type: ndcg_at_100 value: 31.701 - type: ndcg_at_1000 value: 35.260999999999996 - type: ndcg_at_3 value: 19.492 - type: 
ndcg_at_5 value: 21.543 - type: precision_at_1 value: 23.192 - type: precision_at_10 value: 7.824000000000001 - type: precision_at_100 value: 1.52 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 14.180000000000001 - type: precision_at_5 value: 11.530999999999999 - type: recall_at_1 value: 10.333 - type: recall_at_10 value: 30.142999999999997 - type: recall_at_100 value: 54.298 - type: recall_at_1000 value: 74.337 - type: recall_at_3 value: 17.602999999999998 - type: recall_at_5 value: 22.938 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.03 - type: map_at_10 value: 17.345 - type: map_at_100 value: 23.462 - type: map_at_1000 value: 24.77 - type: map_at_3 value: 12.714 - type: map_at_5 value: 14.722 - type: mrr_at_1 value: 61.0 - type: mrr_at_10 value: 69.245 - type: mrr_at_100 value: 69.715 - type: mrr_at_1000 value: 69.719 - type: mrr_at_3 value: 67.583 - type: mrr_at_5 value: 68.521 - type: ndcg_at_1 value: 47.625 - type: ndcg_at_10 value: 35.973 - type: ndcg_at_100 value: 39.875 - type: ndcg_at_1000 value: 46.922000000000004 - type: ndcg_at_3 value: 40.574 - type: ndcg_at_5 value: 38.18 - type: precision_at_1 value: 61.0 - type: precision_at_10 value: 29.049999999999997 - type: precision_at_100 value: 8.828 - type: precision_at_1000 value: 1.8290000000000002 - type: precision_at_3 value: 45.333 - type: precision_at_5 value: 37.9 - type: recall_at_1 value: 8.03 - type: recall_at_10 value: 22.334 - type: recall_at_100 value: 45.919 - type: recall_at_1000 value: 68.822 - type: recall_at_3 value: 14.038999999999998 - type: recall_at_5 value: 17.118 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 44.714999999999996 - type: f1 value: 39.83929362259356 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 52.242999999999995 - type: map_at_10 value: 64.087 - type: map_at_100 value: 64.549 - type: map_at_1000 value: 64.567 - type: map_at_3 value: 61.667 - type: map_at_5 value: 63.266 - type: mrr_at_1 value: 56.271 - type: mrr_at_10 value: 68.146 - type: mrr_at_100 value: 68.524 - type: mrr_at_1000 value: 68.53200000000001 - type: mrr_at_3 value: 65.869 - type: mrr_at_5 value: 67.37100000000001 - type: ndcg_at_1 value: 56.271 - type: ndcg_at_10 value: 70.109 - type: ndcg_at_100 value: 72.09 - type: ndcg_at_1000 value: 72.479 - type: ndcg_at_3 value: 65.559 - type: ndcg_at_5 value: 68.242 - type: precision_at_1 value: 56.271 - type: precision_at_10 value: 9.286999999999999 - type: precision_at_100 value: 1.039 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 26.308 - type: precision_at_5 value: 17.291 - type: recall_at_1 value: 52.242999999999995 - type: recall_at_10 value: 84.71 - type: recall_at_100 value: 93.309 - type: recall_at_1000 value: 96.013 - type: recall_at_3 value: 72.554 - type: recall_at_5 value: 79.069 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 14.346 - type: map_at_10 value: 24.552 - type: map_at_100 value: 26.161 - type: map_at_1000 value: 26.345000000000002 - type: map_at_3 value: 21.208 - type: map_at_5 value: 22.959 - type: mrr_at_1 value: 29.166999999999998 - type: mrr_at_10 value: 38.182 - type: 
mrr_at_100 value: 39.22 - type: mrr_at_1000 value: 39.263 - type: mrr_at_3 value: 35.983 - type: mrr_at_5 value: 37.14 - type: ndcg_at_1 value: 29.166999999999998 - type: ndcg_at_10 value: 31.421 - type: ndcg_at_100 value: 38.129999999999995 - type: ndcg_at_1000 value: 41.569 - type: ndcg_at_3 value: 28.172000000000004 - type: ndcg_at_5 value: 29.029 - type: precision_at_1 value: 29.166999999999998 - type: precision_at_10 value: 8.997 - type: precision_at_100 value: 1.5709999999999997 - type: precision_at_1000 value: 0.22 - type: precision_at_3 value: 19.187 - type: precision_at_5 value: 13.980999999999998 - type: recall_at_1 value: 14.346 - type: recall_at_10 value: 37.963 - type: recall_at_100 value: 63.43299999999999 - type: recall_at_1000 value: 84.057 - type: recall_at_3 value: 26.119999999999997 - type: recall_at_5 value: 30.988 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 33.059 - type: map_at_10 value: 46.421 - type: map_at_100 value: 47.323 - type: map_at_1000 value: 47.403 - type: map_at_3 value: 43.553999999999995 - type: map_at_5 value: 45.283 - type: mrr_at_1 value: 66.117 - type: mrr_at_10 value: 73.10900000000001 - type: mrr_at_100 value: 73.444 - type: mrr_at_1000 value: 73.46000000000001 - type: mrr_at_3 value: 71.70400000000001 - type: mrr_at_5 value: 72.58099999999999 - type: ndcg_at_1 value: 66.117 - type: ndcg_at_10 value: 55.696999999999996 - type: ndcg_at_100 value: 59.167 - type: ndcg_at_1000 value: 60.809000000000005 - type: ndcg_at_3 value: 51.243 - type: ndcg_at_5 value: 53.627 - type: precision_at_1 value: 66.117 - type: precision_at_10 value: 11.538 - type: precision_at_100 value: 1.429 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 31.861 - type: precision_at_5 value: 20.997 - type: recall_at_1 value: 33.059 - type: recall_at_10 value: 57.691 - type: recall_at_100 value: 71.458 - type: recall_at_1000 value: 82.35 - type: recall_at_3 value: 47.792 - type: recall_at_5 value: 52.492000000000004 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 80.544 - type: ap value: 74.69592367984956 - type: f1 value: 80.51138138449883 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 17.095 - type: map_at_10 value: 28.038999999999998 - type: map_at_100 value: 29.246 - type: map_at_1000 value: 29.311 - type: map_at_3 value: 24.253 - type: map_at_5 value: 26.442 - type: mrr_at_1 value: 17.535999999999998 - type: mrr_at_10 value: 28.53 - type: mrr_at_100 value: 29.697000000000003 - type: mrr_at_1000 value: 29.755 - type: mrr_at_3 value: 24.779999999999998 - type: mrr_at_5 value: 26.942 - type: ndcg_at_1 value: 17.549999999999997 - type: ndcg_at_10 value: 34.514 - type: ndcg_at_100 value: 40.497 - type: ndcg_at_1000 value: 42.17 - type: ndcg_at_3 value: 26.764 - type: ndcg_at_5 value: 30.678 - type: precision_at_1 value: 17.549999999999997 - type: precision_at_10 value: 5.692 - type: precision_at_100 value: 0.8699999999999999 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 11.562 - type: precision_at_5 value: 8.917 - type: recall_at_1 value: 17.095 - type: recall_at_10 value: 54.642 - type: recall_at_100 value: 82.652 - type: recall_at_1000 value: 95.555 - type: recall_at_3 value: 33.504 - type: recall_at_5 
value: 42.925000000000004 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.75558595531236 - type: f1 value: 91.25979279648296 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 69.90424076607387 - type: f1 value: 52.067408707562244 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.13449899125757 - type: f1 value: 67.62456762910598 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.862138533961 - type: f1 value: 74.66457222091381 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.10761942610792 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.673172170578408 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.058704977250315 - type: mrr value: 33.24327760839221 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.163 - type: map_at_10 value: 11.652999999999999 - type: map_at_100 value: 14.849 - type: map_at_1000 value: 16.253999999999998 - type: map_at_3 value: 8.616999999999999 - type: map_at_5 value: 10.100000000000001 - type: mrr_at_1 value: 44.272 - type: mrr_at_10 value: 52.25 - type: mrr_at_100 value: 52.761 - type: mrr_at_1000 value: 52.811 - type: mrr_at_3 value: 50.31 - type: mrr_at_5 value: 51.347 - type: ndcg_at_1 value: 42.105 - type: ndcg_at_10 value: 32.044 - type: ndcg_at_100 value: 29.763 - type: ndcg_at_1000 value: 38.585 - type: ndcg_at_3 value: 36.868 - type: ndcg_at_5 value: 35.154999999999994 - type: precision_at_1 value: 43.653 - type: precision_at_10 value: 23.622 - type: precision_at_100 value: 7.7490000000000006 - type: precision_at_1000 value: 2.054 - type: precision_at_3 value: 34.262 - type: precision_at_5 value: 30.154999999999998 - type: recall_at_1 value: 5.163 - type: recall_at_10 value: 15.478 - type: recall_at_100 value: 30.424 - type: recall_at_1000 value: 62.67 - type: recall_at_3 value: 9.615 - type: recall_at_5 value: 12.369 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 21.618000000000002 - type: map_at_10 value: 35.465 - type: map_at_100 value: 36.712 - type: map_at_1000 value: 36.757 - type: map_at_3 value: 31.189 - type: map_at_5 value: 33.537 - type: mrr_at_1 value: 24.305 - type: mrr_at_10 value: 37.653 - type: mrr_at_100 value: 38.662 - type: mrr_at_1000 value: 38.694 - type: mrr_at_3 value: 33.889 - type: mrr_at_5 value: 35.979 - type: 
ndcg_at_1 value: 24.305 - type: ndcg_at_10 value: 43.028 - type: ndcg_at_100 value: 48.653999999999996 - type: ndcg_at_1000 value: 49.733 - type: ndcg_at_3 value: 34.768 - type: ndcg_at_5 value: 38.753 - type: precision_at_1 value: 24.305 - type: precision_at_10 value: 7.59 - type: precision_at_100 value: 1.076 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 16.271 - type: precision_at_5 value: 12.068 - type: recall_at_1 value: 21.618000000000002 - type: recall_at_10 value: 63.977 - type: recall_at_100 value: 89.03999999999999 - type: recall_at_1000 value: 97.10600000000001 - type: recall_at_3 value: 42.422 - type: recall_at_5 value: 51.629000000000005 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 69.405 - type: map_at_10 value: 83.05 - type: map_at_100 value: 83.684 - type: map_at_1000 value: 83.70400000000001 - type: map_at_3 value: 80.08800000000001 - type: map_at_5 value: 81.937 - type: mrr_at_1 value: 79.85 - type: mrr_at_10 value: 86.369 - type: mrr_at_100 value: 86.48599999999999 - type: mrr_at_1000 value: 86.48700000000001 - type: mrr_at_3 value: 85.315 - type: mrr_at_5 value: 86.044 - type: ndcg_at_1 value: 79.86999999999999 - type: ndcg_at_10 value: 87.04499999999999 - type: ndcg_at_100 value: 88.373 - type: ndcg_at_1000 value: 88.531 - type: ndcg_at_3 value: 84.04 - type: ndcg_at_5 value: 85.684 - type: precision_at_1 value: 79.86999999999999 - type: precision_at_10 value: 13.183 - type: precision_at_100 value: 1.51 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.67 - type: precision_at_5 value: 24.12 - type: recall_at_1 value: 69.405 - type: recall_at_10 value: 94.634 - type: recall_at_100 value: 99.214 - type: recall_at_1000 value: 99.958 - type: recall_at_3 value: 85.992 - type: recall_at_5 value: 90.656 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 50.191676323145465 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 56.4874020363744 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.228 - type: map_at_10 value: 11.245 - type: map_at_100 value: 13.353000000000002 - type: map_at_1000 value: 13.665 - type: map_at_3 value: 7.779999999999999 - type: map_at_5 value: 9.405 - type: mrr_at_1 value: 20.9 - type: mrr_at_10 value: 31.657999999999998 - type: mrr_at_100 value: 32.769999999999996 - type: mrr_at_1000 value: 32.833 - type: mrr_at_3 value: 28.333000000000002 - type: mrr_at_5 value: 30.043 - type: ndcg_at_1 value: 20.9 - type: ndcg_at_10 value: 19.073 - type: ndcg_at_100 value: 27.055 - type: ndcg_at_1000 value: 32.641 - type: ndcg_at_3 value: 17.483999999999998 - type: ndcg_at_5 value: 15.42 - type: precision_at_1 value: 20.9 - type: precision_at_10 value: 10.17 - type: precision_at_100 value: 2.162 - type: precision_at_1000 value: 0.35100000000000003 - type: precision_at_3 value: 16.467000000000002 - type: precision_at_5 value: 13.68 - type: recall_at_1 value: 4.228 - type: recall_at_10 value: 20.573 - type: recall_at_100 value: 43.887 - type: recall_at_1000 value: 71.22 - type: recall_at_3 value: 10.023 - type: 
recall_at_5 value: 13.873 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.77965135067481 - type: cos_sim_spearman value: 75.85121335808076 - type: euclidean_pearson value: 80.09115175262697 - type: euclidean_spearman value: 75.72249155647123 - type: manhattan_pearson value: 79.89723577351782 - type: manhattan_spearman value: 75.49855259442387 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 80.46084116030949 - type: cos_sim_spearman value: 72.57579204392951 - type: euclidean_pearson value: 76.39020830763684 - type: euclidean_spearman value: 72.3718627025895 - type: manhattan_pearson value: 76.6148833027359 - type: manhattan_spearman value: 72.57570008442319 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.43678068337017 - type: cos_sim_spearman value: 82.38941154076062 - type: euclidean_pearson value: 81.59260573633661 - type: euclidean_spearman value: 82.31144262574114 - type: manhattan_pearson value: 81.43266909137056 - type: manhattan_spearman value: 82.14704293004861 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 80.73713431763163 - type: cos_sim_spearman value: 77.97860512809388 - type: euclidean_pearson value: 80.35755041527027 - type: euclidean_spearman value: 78.021703511412 - type: manhattan_pearson value: 80.24440317109162 - type: manhattan_spearman value: 77.93165415697575 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 85.15111852351204 - type: cos_sim_spearman value: 86.54032447238258 - type: euclidean_pearson value: 86.14157021537433 - type: euclidean_spearman value: 86.67537291929713 - type: manhattan_pearson value: 86.081041854808 - type: manhattan_spearman value: 86.61561701560558 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 81.34532445104026 - type: cos_sim_spearman value: 83.31325001474116 - type: euclidean_pearson value: 82.81892375201032 - type: euclidean_spearman value: 83.4521695148055 - type: manhattan_pearson value: 82.72503790526163 - type: manhattan_spearman value: 83.37833652941349 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.25463453839801 - type: cos_sim_spearman value: 88.27655263515948 - type: euclidean_pearson value: 88.0248334411439 - type: euclidean_spearman value: 88.18141448876868 - type: manhattan_pearson value: 87.8080451127279 - type: manhattan_spearman value: 88.01028114423058 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.57551045355218 - type: cos_sim_spearman value: 66.67614095126629 - type: euclidean_pearson value: 
66.0787243112528 - type: euclidean_spearman value: 66.83660560636939 - type: manhattan_pearson value: 66.74684019662031 - type: manhattan_spearman value: 67.11761598074368 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 83.70881496766829 - type: cos_sim_spearman value: 84.37803542941634 - type: euclidean_pearson value: 84.84501245857096 - type: euclidean_spearman value: 84.47088079741476 - type: manhattan_pearson value: 84.77244090794765 - type: manhattan_spearman value: 84.43307343706205 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 81.53946254759089 - type: mrr value: 94.68259953554072 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 51.817 - type: map_at_10 value: 62.339999999999996 - type: map_at_100 value: 62.88 - type: map_at_1000 value: 62.909000000000006 - type: map_at_3 value: 59.004 - type: map_at_5 value: 60.906000000000006 - type: mrr_at_1 value: 54.333 - type: mrr_at_10 value: 63.649 - type: mrr_at_100 value: 64.01 - type: mrr_at_1000 value: 64.039 - type: mrr_at_3 value: 61.056 - type: mrr_at_5 value: 62.639 - type: ndcg_at_1 value: 54.333 - type: ndcg_at_10 value: 67.509 - type: ndcg_at_100 value: 69.69999999999999 - type: ndcg_at_1000 value: 70.613 - type: ndcg_at_3 value: 61.729 - type: ndcg_at_5 value: 64.696 - type: precision_at_1 value: 54.333 - type: precision_at_10 value: 9.2 - type: precision_at_100 value: 1.043 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 24.0 - type: precision_at_5 value: 16.2 - type: recall_at_1 value: 51.817 - type: recall_at_10 value: 82.056 - type: recall_at_100 value: 91.667 - type: recall_at_1000 value: 99.0 - type: recall_at_3 value: 66.717 - type: recall_at_5 value: 74.17200000000001 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.82475247524752 - type: cos_sim_ap value: 95.4781199603258 - type: cos_sim_f1 value: 91.16186693147964 - type: cos_sim_precision value: 90.53254437869822 - type: cos_sim_recall value: 91.8 - type: dot_accuracy value: 99.75049504950495 - type: dot_ap value: 93.05183539809457 - type: dot_f1 value: 87.31117824773412 - type: dot_precision value: 87.93103448275862 - type: dot_recall value: 86.7 - type: euclidean_accuracy value: 99.82475247524752 - type: euclidean_ap value: 95.38547978154382 - type: euclidean_f1 value: 91.16325511732403 - type: euclidean_precision value: 91.02691924227318 - type: euclidean_recall value: 91.3 - type: manhattan_accuracy value: 99.82574257425742 - type: manhattan_ap value: 95.47237521890308 - type: manhattan_f1 value: 91.27849355797821 - type: manhattan_precision value: 90.47151277013754 - type: manhattan_recall value: 92.10000000000001 - type: max_accuracy value: 99.82574257425742 - type: max_ap value: 95.4781199603258 - type: max_f1 value: 91.27849355797821 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure 
value: 57.542169376331245 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.74399302634387 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.65076347632749 - type: mrr value: 50.418099057804945 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.73997756592847 - type: cos_sim_spearman value: 29.465208011593308 - type: dot_pearson value: 24.83735342474541 - type: dot_spearman value: 26.005180528584855 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.208 - type: map_at_10 value: 1.434 - type: map_at_100 value: 7.829 - type: map_at_1000 value: 19.807 - type: map_at_3 value: 0.549 - type: map_at_5 value: 0.8330000000000001 - type: mrr_at_1 value: 78.0 - type: mrr_at_10 value: 85.35199999999999 - type: mrr_at_100 value: 85.673 - type: mrr_at_1000 value: 85.673 - type: mrr_at_3 value: 84.667 - type: mrr_at_5 value: 85.06700000000001 - type: ndcg_at_1 value: 72.0 - type: ndcg_at_10 value: 59.214999999999996 - type: ndcg_at_100 value: 44.681 - type: ndcg_at_1000 value: 43.035000000000004 - type: ndcg_at_3 value: 66.53099999999999 - type: ndcg_at_5 value: 63.23 - type: precision_at_1 value: 78.0 - type: precision_at_10 value: 62.4 - type: precision_at_100 value: 45.76 - type: precision_at_1000 value: 19.05 - type: precision_at_3 value: 71.333 - type: precision_at_5 value: 67.2 - type: recall_at_1 value: 0.208 - type: recall_at_10 value: 1.6580000000000001 - type: recall_at_100 value: 11.324 - type: recall_at_1000 value: 41.537 - type: recall_at_3 value: 0.579 - type: recall_at_5 value: 0.8959999999999999 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.442 - type: map_at_10 value: 8.863 - type: map_at_100 value: 14.606 - type: map_at_1000 value: 16.258 - type: map_at_3 value: 4.396 - type: map_at_5 value: 6.199000000000001 - type: mrr_at_1 value: 30.612000000000002 - type: mrr_at_10 value: 43.492 - type: mrr_at_100 value: 44.557 - type: mrr_at_1000 value: 44.557 - type: mrr_at_3 value: 40.816 - type: mrr_at_5 value: 42.143 - type: ndcg_at_1 value: 25.509999999999998 - type: ndcg_at_10 value: 22.076 - type: ndcg_at_100 value: 34.098 - type: ndcg_at_1000 value: 46.265 - type: ndcg_at_3 value: 24.19 - type: ndcg_at_5 value: 23.474 - type: precision_at_1 value: 30.612000000000002 - type: precision_at_10 value: 19.796 - type: precision_at_100 value: 7.286 - type: precision_at_1000 value: 1.5310000000000001 - type: precision_at_3 value: 25.85 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.442 - type: recall_at_10 value: 15.012 - type: recall_at_100 value: 45.865 - type: recall_at_1000 value: 82.958 - type: recall_at_3 value: 5.731 - type: recall_at_5 value: 9.301 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: 
accuracy value: 70.974 - type: ap value: 14.534996211286682 - type: f1 value: 54.785946183399005 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 58.56819468024901 - type: f1 value: 58.92391487111204 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 43.273202335218194 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.37742146986946 - type: cos_sim_ap value: 68.1684129575579 - type: cos_sim_f1 value: 64.93475108748189 - type: cos_sim_precision value: 59.89745876058849 - type: cos_sim_recall value: 70.89709762532982 - type: dot_accuracy value: 80.49710913750968 - type: dot_ap value: 54.699790073944186 - type: dot_f1 value: 54.45130013221684 - type: dot_precision value: 46.74612183125236 - type: dot_recall value: 65.19788918205805 - type: euclidean_accuracy value: 84.5085533766466 - type: euclidean_ap value: 68.38835695236224 - type: euclidean_f1 value: 65.3391121002694 - type: euclidean_precision value: 58.75289656625237 - type: euclidean_recall value: 73.58839050131925 - type: manhattan_accuracy value: 84.40126363473803 - type: manhattan_ap value: 68.09539181555348 - type: manhattan_f1 value: 64.99028182701653 - type: manhattan_precision value: 60.22062134173795 - type: manhattan_recall value: 70.58047493403694 - type: max_accuracy value: 84.5085533766466 - type: max_ap value: 68.38835695236224 - type: max_f1 value: 65.3391121002694 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.34167733923235 - type: cos_sim_ap value: 84.84136381147736 - type: cos_sim_f1 value: 77.01434980904001 - type: cos_sim_precision value: 74.27937915742794 - type: cos_sim_recall value: 79.95842315983985 - type: dot_accuracy value: 85.06422944075756 - type: dot_ap value: 76.49446747522325 - type: dot_f1 value: 71.11606520830432 - type: dot_precision value: 64.93638676844785 - type: dot_recall value: 78.59562673236834 - type: euclidean_accuracy value: 88.45810532852097 - type: euclidean_ap value: 84.91526721863501 - type: euclidean_f1 value: 77.04399001750662 - type: euclidean_precision value: 74.62298867162133 - type: euclidean_recall value: 79.62734832152756 - type: manhattan_accuracy value: 88.46004579500912 - type: manhattan_ap value: 84.81590026238194 - type: manhattan_f1 value: 76.97804626491822 - type: manhattan_precision value: 73.79237288135593 - type: manhattan_recall value: 80.45118570988605 - type: max_accuracy value: 88.46004579500912 - type: max_ap value: 84.91526721863501 - type: max_f1 value: 77.04399001750662 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb --- # {gte-tiny} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. 
It is distilled from `thenlper/gte-small`, with comparable (slightly worse) performance at around half the size.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
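As a small follow-up to the usage sections above, here is a minimal semantic-search sketch built on the same embeddings. The `'{MODEL_NAME}'` placeholder is kept from the card; substitute the actual repo id before running:

```python
from sentence_transformers import SentenceTransformer, util

# '{MODEL_NAME}' is the placeholder used throughout this card; substitute the real repo id.
model = SentenceTransformer('{MODEL_NAME}')

corpus = ["The cat sits on the mat.", "GPUs accelerate deep learning.", "A kitten rests on a rug."]
query = "Where is the cat?"

# Encode corpus and query into 384-dimensional vectors
corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = scores.argmax().item()
print(corpus[best], scores[best].item())
```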
NeverSleep/Mistral-11B-SynthIAirOmniMix-GGUF
NeverSleep
2023-10-14T15:43:18Z
11
0
null
[ "gguf", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2023-10-14T15:23:48Z
---
license: cc-by-nc-4.0
---

Zephyr was replaced by Airoboros 2.2, and OpenOrca by SynthIA, in this mix. The goal is to see whether merging Mistral models that all use the same prompt format is a better approach or not.

## Description

This repo contains quantized files of Mistral-11B-SynthIAirOmniMix.

## Model used
- [SynthIA-7B-v1.5](https://huggingface.co/migtissera/SynthIA-7B-v1.5)
- [Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b)

## Prompt template

3 out of 4 models in this merge use the same prompt format. The best one should be this one, since Zephyr and OpenOrca are out of the merge:

```
(SYSTEM: {context}) - Not mandatory
USER: {prompt}
ASSISTANT:
```

But this one may work too:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

## The secret sauce

Mistral-11B-SynthIAOpenPlatypus :
```
slices:
  - sources:
    - model: "/content/drive/MyDrive/SynthIA-7B-v1.5-bf16"
      layer_range: [0, 24]
  - sources:
    - model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-CC-Airo :
```
slices:
  - sources:
    - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
      layer_range: [0, 24]
  - sources:
    - model: "/content/drive/MyDrive/Mistral-7B-Airoboros-2.2-bf16"
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-SynthIAirOmniMix :
```
slices:
  - sources:
    - model: Mistral-11B-SynthIAOpenPlatypus
      layer_range: [0, 48]
    - model: Mistral-11B-CC-Airo
      layer_range: [0, 48]
merge_method: slerp
base_model: Mistral-11B-OpenOrcaPlatypus
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```

I used [mergekit](https://github.com/cg123/mergekit) for all the manipulations described here.

## Some scoring I did myself

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/rnraBZz-I9CUD1GVNVF00.png)

| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5410|± |0.0146|
| | |acc_norm|0.5640|± |0.0145|
|arc_easy | 0|acc |0.8228|± |0.0078|
| | |acc_norm|0.8068|± |0.0081|
|hellaswag | 0|acc |0.6274|± |0.0048|
| | |acc_norm|0.8167|± |0.0039|
|piqa | 0|acc |0.8052|± |0.0092|
| | |acc_norm|0.8232|± |0.0089|
|truthfulqa_mc| 1|mc1 |0.3905|± |0.0171|
| | |mc2 |0.5592|± |0.0155|
|winogrande | 0|acc |0.7364|± |0.0124|

## Others

Special thanks to Sushi, to [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and to [Charles Goddard](https://github.com/cg123) for his amazing tool.

If you want to support me, you can [here](https://ko-fi.com/undiai).
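Since this repo ships GGUF files, a minimal inference sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) could look like the following. The quant filename is a hypothetical placeholder (use whichever `.gguf` file you downloaded), and the prompt string follows the USER/ASSISTANT template described above:

```python
from llama_cpp import Llama

# Hypothetical filename - substitute whichever quant you actually downloaded from this repo.
llm = Llama(model_path="mistral-11b-synthiairomnimix.Q4_K_M.gguf", n_ctx=4096)

# Prompt follows the card's recommended USER/ASSISTANT format.
prompt = "USER: Explain what a SLERP merge does in one sentence.\nASSISTANT:"

out = llm(prompt, max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"].strip())
```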
NeverSleep/Mistral-11B-AirOmniMix-GGUF
NeverSleep
2023-10-14T15:36:47Z
26
0
null
[ "gguf", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2023-10-14T04:00:46Z
---
license: cc-by-nc-4.0
---

Zephyr was replaced by Airoboros 2.2 in this mix.

## Description

This repo contains quantized files of Mistral-11B-AirOmniMix.

## Model used
- [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
- [Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b)

## Prompt template

The best one after further testing is this one, since Zephyr is out of the merge:

```
USER: <prompt>
ASSISTANT:
```

But this one works too:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

Or use the prompt format of any of the 4 source models; it should work.

## The secret sauce

Mistral-11B-OpenOrcaPlatypus :
```
slices:
  - sources:
    - model: Open-Orca/Mistral-7B-OpenOrca
      layer_range: [0, 24]
  - sources:
    - model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-CC-Airo :
```
slices:
  - sources:
    - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
      layer_range: [0, 24]
  - sources:
    - model: "/content/drive/MyDrive/Mistral-7B-Airoboros-2.2-bf16"
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-AirOmniMix :
```
slices:
  - sources:
    - model: Mistral-11B-OpenOrcaPlatypus
      layer_range: [0, 48]
    - model: Mistral-11B-CC-Airo
      layer_range: [0, 48]
merge_method: slerp
base_model: Mistral-11B-OpenOrcaPlatypus
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```

I used [mergekit](https://github.com/cg123/mergekit) for all the manipulations described here.

## Some scoring I did myself

hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-AirOmniMix), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4

| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5452|± |0.0146|
| | |acc_norm|0.5836|± |0.0144|
|arc_easy | 0|acc |0.8321|± |0.0077|
| | |acc_norm|0.8119|± |0.0080|
|hellaswag | 0|acc |0.6381|± |0.0048|
| | |acc_norm|0.8250|± |0.0038|
|piqa | 0|acc |0.8096|± |0.0092|
| | |acc_norm|0.8243|± |0.0089|
|truthfulqa_mc| 1|mc1 |0.3941|± |0.0171|
| | |mc2 |0.5606|± |0.0155|
|winogrande | 0|acc |0.7395|± |0.0123|

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/rnraBZz-I9CUD1GVNVF00.png)

## Others

Special thanks to Sushi, to [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and to [Charles Goddard](https://github.com/cg123) for his amazing tool.

If you want to support me, you can [here](https://ko-fi.com/undiai).
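As a quick illustration of why the passthrough merges above yield an ~11B model: each source is a 32-layer Mistral 7B, and stacking layer ranges [0, 24] and [8, 32] gives 48 layers, with 16 layer indices duplicated across the two halves. A small sketch of that arithmetic (the per-layer parameter figure is a rough assumption for scale, not an official count):

```python
# Each Mistral 7B has 32 decoder layers; the passthrough config stacks
# layers [0, 24) of model A on top of layers [8, 32) of model B.
range_a = range(0, 24)   # 24 layers taken from the first model
range_b = range(8, 32)   # 24 layers taken from the second model

total_layers = len(range_a) + len(range_b)
overlap = len(set(range_a) & set(range_b))  # layer indices present in both halves

print(total_layers)  # 48 layers in the merged model (vs 32 in a single 7B)
print(overlap)       # 16 duplicated layer indices

# Rough scale check (assumption: ~7B params spread over 32 layers,
# ignoring embeddings and the LM head):
approx_params_b = 7 / 32 * total_layers
print(round(approx_params_b, 1))  # ~10.5, i.e. the "11B" in the model name
```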
CreatorPhan/Bloomz_lora_answer
CreatorPhan
2023-10-14T15:35:02Z
3
0
peft
[ "peft", "tensorboard", "region:us" ]
null
2023-10-12T04:48:33Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32

### Framework versions

- PEFT 0.6.0.dev0
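The config above shows the base model was loaded in 8-bit during training. A minimal sketch for loading this adapter the same way follows; the base checkpoint id is an assumption inferred from the repo name (some bloomz checkpoint) and is not stated in this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: the adapter was trained on a bloomz base; substitute the actual base checkpoint.
base_id = "bigscience/bloomz-560m"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")

# Attach the LoRA adapter weights from this repo on top of the 8-bit base model
model = PeftModel.from_pretrained(base, "CreatorPhan/Bloomz_lora_answer")

inputs = tokenizer("Question: What is PEFT?\nAnswer:", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```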
sungkwangjoong/xlm-roberta-base-finetuned-panx-all
sungkwangjoong
2023-10-14T15:34:54Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-14T15:29:10Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1763 - F1: 0.8466 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3074 | 1.0 | 835 | 0.1948 | 0.8087 | | 0.1625 | 2.0 | 1670 | 0.1672 | 0.8350 | | 0.1064 | 3.0 | 2505 | 0.1763 | 0.8466 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
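A minimal usage sketch (not part of the original card); the example sentence is arbitrary.

```python
from transformers import pipeline

# Token-classification (NER-style) inference with the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="sungkwangjoong/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("George Washington lived in Virginia."))
```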
sungkwangjoong/xlm-roberta-base-finetuned-panx-en
sungkwangjoong
2023-10-14T15:28:34Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-14T15:26:33Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.en split: validation args: PAN-X.en metrics: - name: F1 type: f1 value: 0.7014590347923682 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3964 - F1: 0.7015 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1219 | 1.0 | 50 | 0.6235 | 0.4926 | | 0.508 | 2.0 | 100 | 0.4043 | 0.6909 | | 0.3484 | 3.0 | 150 | 0.3964 | 0.7015 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
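To inspect the evaluation split this card reports on, the PAN-X.en config of the `xtreme` dataset can be loaded as below (a sketch, not part of the original card).

```python
from datasets import load_dataset

# PAN-X.en: the English NER subset of XTREME used by this model.
panx_en = load_dataset("xtreme", "PAN-X.en")
print(panx_en["validation"][0])  # tokens and their NER tags
```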
adithyabhaskar/flan-t5-xl-t2s
adithyabhaskar
2023-10-14T15:27:00Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-12T20:31:06Z
--- tags: - generated_from_trainer model-index: - name: flan-t5-xl-t2s results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-xl-t2s ## Model description This FLAN-T5-XL model converts a given English question, conditioned on a DB schema, to the corresponding SQL query. Input format: ``` semantic parse: <question> | <db_id> | <schema_without_content> ``` Where `<schema_without_content>` is of the form `<table1>: <col1>, <col2>, ... | <table2>: ...`. The output format is ``` <db_id> | <query> ``` This model is one of the baselines (FLAN-T5-XL) we compared to in our paper.
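Putting the input format above together, a minimal generation sketch; the question and schema below are made up for illustration.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "adithyabhaskar/flan-t5-xl-t2s"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="auto")

# Hypothetical question and schema, following the documented input format.
prompt = (
    "semantic parse: How many singers do we have? | concert_singer | "
    "singer: singer_id, name, country, age | concert: concert_id, concert_name, year"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # "<db_id> | <query>"
```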
LoftQ/bart-large-bit2-rank8
LoftQ
2023-10-14T15:14:16Z
0
0
peft
[ "peft", "pytorch", "bart", "arxiv:1910.09700", "base_model:facebook/bart-large", "base_model:adapter:facebook/bart-large", "region:us" ]
null
2023-10-14T15:12:35Z
--- library_name: peft base_model: facebook/bart-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
LoftQ/bart-large-bit2-rank32
LoftQ
2023-10-14T15:04:58Z
3
0
peft
[ "peft", "pytorch", "bart", "arxiv:1910.09700", "base_model:facebook/bart-large", "base_model:adapter:facebook/bart-large", "region:us" ]
null
2023-10-14T15:00:21Z
--- library_name: peft base_model: facebook/bart-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
tt1717/ppo-LunarLander-v2-scratch
tt1717
2023-10-14T15:00:47Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T15:00:41Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -153.09 +/- 78.81 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'T-T1717/ppo-LunarLander-v2-scratch' 'batch_size': 512 'minibatch_size': 128} ```
pfunk/PongNoFrameskip-v4-DDQPN_DDQN-seed3
pfunk
2023-10-14T14:59:15Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "PongNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T14:59:08Z
--- tags: - PongNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PongNoFrameskip-v4 type: PongNoFrameskip-v4 metrics: - type: mean_reward value: 18.53 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **PongNoFrameskip-v4** This is a trained model of a DQPN_freq agent playing PongNoFrameskip-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DDQPN_DDQN.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DDQPN_DDQN]" python -m cleanrl_utils.enjoy --exp-name DDQPN_DDQN --env-id PongNoFrameskip-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQPN_DDQN-seed3/raw/main/dqpn_freq_atari.py curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQPN_DDQN-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQPN_DDQN-seed3/raw/main/poetry.lock poetry install --all-extras python dqpn_freq_atari.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DDQPN_DDQN --target-network-frequency 1000 --policy-network-frequency 1 --seed 3 --double-learning ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq_atari.py', 'batch_size': 32, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'double_learning': True, 'end_e': 0.05, 'env_id': 'PongNoFrameskip-v4', 'exp_name': 'DDQPN_DDQN', 'exploration_fraction': 0.2, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 10000, 'max_gradient_norm': inf, 'policy_network_frequency': 1, 'policy_tau': 1.0, 'save_model': True, 'seed': 3, 'start_e': 1.0, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
LoftQ/bart-large-bit4-rank8
LoftQ
2023-10-14T14:58:39Z
0
0
peft
[ "peft", "pytorch", "bart", "arxiv:1910.09700", "base_model:facebook/bart-large", "base_model:adapter:facebook/bart-large", "region:us" ]
null
2023-10-14T14:56:28Z
--- library_name: peft base_model: facebook/bart-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
shyam-incedoinc/my_awesome_eli5_mlm_model
shyam-incedoinc
2023-10-14T14:52:35Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-10-14T14:52:03Z
--- license: apache-2.0 base_model: distilroberta-base tags: - generated_from_trainer model-index: - name: my_awesome_eli5_mlm_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_mlm_model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8642 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 64 | 1.9911 | | No log | 2.0 | 128 | 2.0413 | | No log | 3.0 | 192 | 1.8887 | | No log | 4.0 | 256 | 1.8567 | | No log | 5.0 | 320 | 1.9673 | | No log | 6.0 | 384 | 1.9546 | | No log | 7.0 | 448 | 1.8128 | | 2.0263 | 8.0 | 512 | 1.9133 | | 2.0263 | 9.0 | 576 | 1.9318 | | 2.0263 | 10.0 | 640 | 1.8511 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
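A minimal fill-mask sketch (not part of the original card); note that RoBERTa-style tokenizers use `<mask>` as the mask token.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="shyam-incedoinc/my_awesome_eli5_mlm_model")
for pred in fill("The Milky Way is a <mask> galaxy."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```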
LoftQ/bart-large-bit4-rank32
LoftQ
2023-10-14T14:51:35Z
1
0
peft
[ "peft", "pytorch", "bart", "arxiv:1910.09700", "base_model:facebook/bart-large", "base_model:adapter:facebook/bart-large", "region:us" ]
null
2023-10-14T14:49:41Z
--- library_name: peft base_model: facebook/bart-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
SakshiRathi77/wav2vec2_xlsr_300m
SakshiRathi77
2023-10-14T14:44:27Z
12
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_15_0", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-10-14T08:30:35Z
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer metrics: - wer - cer model-index: - name: wav2vec2-large-xls-r-300m-hi results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 15 type: mozilla-foundation/common_voice_15_0 args: hi metrics: - name: Test WER type: wer value: 29.34 - name: Test CER type: cer value: 7.86 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: hi metrics: - name: Test WER type: wer value: 52.09 - name: Test CER type: cer value: 17.90 datasets: - mozilla-foundation/common_voice_15_0 language: - hi library_name: transformers pipeline_tag: automatic-speech-recognition --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hi This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3611 - Wer: 29.92% - Cer: 7.86% View the results on Kaggle Notebook: https://www.kaggle.com/code/kingabzpro/wav2vec-2-eval ## Evaluation ```python import torch from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import librosa import unicodedata import re test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "hi", split="test") wer = load_metric("wer") cer = load_metric("cer") processor = Wav2Vec2Processor.from_pretrained("SakshiRathi77/wav2vec2_xlsr_300m") model = Wav2Vec2ForCTC.from_pretrained("SakshiRathi77/wav2vec2_xlsr_300m") model.to("cuda") # Preprocessing the datasets. def speech_file_to_array_fn(batch): chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\'\|\&\–]' remove_en = '[A-Za-z]' batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"].lower()) batch["sentence"] = re.sub(remove_en, "", batch["sentence"]).lower() batch["sentence"] = unicodedata.normalize("NFKC", batch["sentence"]) speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) print("CER: {}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` ```bash WER: 52.09850206372026 CER: 17.902923538230883 ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 7.0431 | 19.05 | 300 | 3.4423 | 1.0 | 1.0 | | 2.3233 | 38.1 | 600 | 0.5965 | 0.4757 | 0.1329 | | 0.5676 | 57.14 | 900 | 0.3962 | 0.3584 | 0.0954 | | 0.3611 | 76.19 | 1200 | 0.3651 | 0.3190 | 0.0820 | | 0.2996 | 95.24 | 1500 | 0.3611 | 0.2992 | 0.0786 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
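Beyond the evaluation script above, a one-call transcription sketch with the ASR pipeline (not part of the original card); the audio path is a placeholder for any 16 kHz Hindi recording.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="SakshiRathi77/wav2vec2_xlsr_300m")
print(asr("sample_hi.wav")["text"])  # "sample_hi.wav" is a placeholder path
```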
prometheus-eval/prometheus-13b-v1.0
prometheus-eval
2023-10-14T14:43:52Z
2,541
134
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text2text-generation", "en", "dataset:kaist-ai/Feedback-Collection", "arxiv:2310.08491", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-12T07:19:38Z
--- tags: - text2text-generation datasets: - kaist-ai/Feedback-Collection license: apache-2.0 language: - en pipeline_tag: text2text-generation library_name: transformers metrics: - pearsonr - spearmanr - accuracy --- ## Links for Reference - **Homepage:https://github.com/kaistAI/Prometheus** - **Repository:https://github.com/kaistAI/Prometheus** - **Paper:https://arxiv.org/abs/2310.08491** - **Point of Contact:[email protected]** # TL;DR Prometheus is an alternative to GPT-4 for fine-grained evaluation of an underlying LLM, and a reward model for Reinforcement Learning from Human Feedback (RLHF). ![plot](./finegrained_eval.JPG) Prometheus is a language model using [Llama-2-Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) as a base model and fine-tuned on 100K feedback within the [Feedback Collection](https://huggingface.co/datasets/kaist-ai/Feedback-Collection). Since it was fine-tuned on a large amount of feedback, it is specialized at evaluating long-form responses, outperforming GPT-3.5-Turbo and Llama-2-Chat 70B and performing on par with GPT-4 on various benchmarks. Most importantly, this was possible because we appended two reference materials (a reference answer and a customized score rubric). Prometheus is a cheap and powerful alternative to GPT-4 evaluation, which one could use to evaluate LLMs with customized criteria (e.g., Child readability, Cultural Sensitivity, Creativity). Also, it could be used as a reward model for Reinforcement Learning from Human Feedback (RLHF). # Model Details ## Model Description - **Model type:** Language model - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Related Models:** [All Prometheus Checkpoints](https://huggingface.co/models?search=kaist-ai/Prometheus) - **Resources for more information:** - [Research paper](https://arxiv.org/abs/2310.08491) - [GitHub Repo](https://github.com/kaistAI/Prometheus) Prometheus is trained at two different sizes (7B and 13B). You can check the 7B-sized LM on [this page](https://huggingface.co/kaist-ai/prometheus-7b-v1.0). Also, check out our dataset on [this page](https://huggingface.co/datasets/kaist-ai/Feedback-Collection). ## Prompt Format Prometheus requires 4 components in the input: an instruction, a response to evaluate, a score rubric, and a reference answer. You can refer to the prompt format below. You should fill in the instruction, response, reference answer, criteria description, and a score description for each score from 1 to 5. ``` ###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\" 4. Please do not generate any other opening, closing, and explanations. 
###The instruction to evaluate: {instruction} ###Response to evaluate: {response} ###Reference Answer (Score 5): {reference_answer} ###Score Rubrics: [{criteria_description}] Score 1: {score1_description} Score 2: {score2_description} Score 3: {score3_description} Score 4: {score4_description} Score 5: {score5_description} ###Feedback: ``` After this, you should apply the conversation template of Llama-2-Chat (not applying it might lead to unexpected behaviors). You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py). ``` from fastchat.conversation import get_conv_template conv = get_conv_template("llama-2") conv.set_system_message("You are a fair evaluator language model.") conv.append_message(conv.roles[0], dialogs['instruction']) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() x = tokenizer(prompt, truncation=False) ``` As a result, a feedback and a score decision will be generated, separated by the phrase ```[RESULT]```. ## License Feedback Collection and Prometheus are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us. # Usage Find below some example scripts on how to use the model in `transformers`: ## Using the Pytorch model ### Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, LlamaForCausalLM tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-13b-v1.0", device_map="auto") input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. 
###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using "I" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:" input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch from transformers import AutoTokenizer, LlamaForCausalLM tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-13b-v1.0", device_map="auto") input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. 
Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using "I" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. 
This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, max_new_tokens=256, repetition_penalty=1.03) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU using different precisions #### FP16 <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch from transformers import AutoTokenizer, LlamaForCausalLM tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-13b-v1.0", device_map="auto", torch_dtype=torch.float16) input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. 
And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using "I" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. 
###Feedback:" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> #### INT8 <details> <summary> Click to expand </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, LlamaForCausalLM tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-13b-v1.0", device_map="auto", load_in_8bit=True) input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using "I" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. 
Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> # Citation If you find the following model helpful, please consider citing our paper! **BibTeX:** ```bibtex @misc{kim2023prometheus, title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models}, author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo}, year={2023}, eprint={2310.08491}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
ammarahad17/Pakwiki_model
ammarahad17
2023-10-14T14:32:50Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-09T19:28:43Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: Pakwiki_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Pakwiki_model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on a dataset of Pakistan-related Wikipedia pages. It achieves the following results on the evaluation set: - Loss: 2.9460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3237 | 1.0 | 1245 | 3.1256 | | 2.9953 | 2.0 | 2490 | 3.0195 | | 2.7845 | 3.0 | 3735 | 2.9717 | | 2.6368 | 4.0 | 4980 | 2.9460 | | 2.5652 | 5.0 | 6225 | 2.9460 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
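Since the usage sections above are empty, here is a minimal, hedged sketch of how a fine-tuned GPT-2 checkpoint like this one is typically loaded; the prompt and sampling settings are illustrative assumptions, not values from the card.

```python
# Hedged usage sketch; the prompt and generation settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="ammarahad17/Pakwiki_model")
print(generator("Lahore is", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```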
RL-Course-2023-Archive/ppo-LunarLander-JT
RL-Course-2023-Archive
2023-10-14T14:27:49Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T14:27:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -140.48 +/- 55.59 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
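A minimal sketch for the TODO above, assuming the checkpoint keeps the course's conventional `ppo-LunarLander-v2.zip` filename (check the repo's file list):

```python
# Sketch only: the zip filename is an assumption based on the usual
# huggingface_sb3 upload convention.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="RL-Course-2023-Archive/ppo-LunarLander-JT", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```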
OckerGui/videomae-base-finetuned-ASBD_v2
OckerGui
2023-10-14T14:24:42Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "videomae", "video-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-10-14T12:41:32Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ASBD_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ASBD_v2 This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4286 - Accuracy: 0.5676 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 260 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0481 | 0.23 | 59 | 1.5955 | 0.2432 | | 0.7648 | 1.23 | 118 | 1.4575 | 0.5405 | | 0.4692 | 2.23 | 177 | 1.4286 | 0.5676 | | 0.257 | 3.23 | 236 | 1.4964 | 0.5676 | | 0.1256 | 4.09 | 260 | 1.5357 | 0.5541 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
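The card omits a usage section, so a hedged inference sketch follows. The dummy frames stand in for a real clip, and if this repo does not ship a preprocessor config, the processor can be loaded from the base `MCG-NJU/videomae-base` instead.

```python
# Hedged sketch: 16 random frames stand in for a real video clip.
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

repo = "OckerGui/videomae-base-finetuned-ASBD_v2"
processor = VideoMAEImageProcessor.from_pretrained(repo)  # fall back to "MCG-NJU/videomae-base" if missing
model = VideoMAEForVideoClassification.from_pretrained(repo)

video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```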
RL-Course-2023-Archive/ppo-LunarLander-Christa
RL-Course-2023-Archive
2023-10-14T14:23:56Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T14:23:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -116.58 +/- 34.19 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
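One hedged way to fill in the TODO, assuming the usual course filename for the saved agent:

```python
# The filename is assumed from the course convention; verify it in the repo.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="RL-Course-2023-Archive/ppo-LunarLander-Christa", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```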
RL-Course-2023-Archive/ppo-LunarLander-ToMo2
RL-Course-2023-Archive
2023-10-14T14:22:20Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T14:21:52Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -167.81 +/- 47.41 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
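A possible completion of the snippet above; the zip filename is an assumption, so verify it against the repo:

```python
# Assumed filename below; check the repo's file list.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="RL-Course-2023-Archive/ppo-LunarLander-ToMo2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```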
RL-Course-2023-Archive/ppo-LunarLander-v2-Zuber
RL-Course-2023-Archive
2023-10-14T14:21:30Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T14:18:54Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -127.69 +/- 49.68 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
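As a hedged sketch, the TODO could be completed like this (filename assumed from the course convention):

```python
# Sketch only: the filename is assumed; confirm it in the repo.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="RL-Course-2023-Archive/ppo-LunarLander-v2-Zuber", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```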
RL-Course-2023-Archive/ppo-LunarLander-Jonas
RL-Course-2023-Archive
2023-10-14T14:20:22Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T14:20:03Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 243.58 +/- 20.99 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
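A minimal, hedged completion of the usage snippet; confirm the checkpoint filename against the repo before running:

```python
# Assumed filename based on the usual huggingface_sb3 convention.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="RL-Course-2023-Archive/ppo-LunarLander-Jonas", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```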
RL-Course-2023-Archive/ppo-LunarLander-v2-jmt
RL-Course-2023-Archive
2023-10-14T14:13:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T14:13:24Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -138.03 +/- 45.82 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
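The TODO above could plausibly be filled in as follows, with the filename assumed from the course convention:

```python
# Sketch only: verify the zip filename in the repo's file list.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="RL-Course-2023-Archive/ppo-LunarLander-v2-jmt", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```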
EllieLee/distilbert-base-uncased-finetuned-emotion
EllieLee
2023-10-14T14:10:18Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-14T13:53:26Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.923973954226437 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2220 - Accuracy: 0.924 - F1: 0.9240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8453 | 1.0 | 250 | 0.3277 | 0.901 | 0.8983 | | 0.2625 | 2.0 | 500 | 0.2220 | 0.924 | 0.9240 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
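For quick inference, a minimal sketch (the example sentence is illustrative); `top_k=None` returns scores for all emotion labels:

```python
# Minimal sketch; the input sentence is illustrative.
from transformers import pipeline

classifier = pipeline("text-classification", model="EllieLee/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled the fine-tune worked!", top_k=None))
```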
pfunk/PongNoFrameskip-v4-DDQPN_x5-seed3
pfunk
2023-10-14T14:02:47Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "PongNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T14:02:41Z
--- tags: - PongNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PongNoFrameskip-v4 type: PongNoFrameskip-v4 metrics: - type: mean_reward value: 19.16 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **PongNoFrameskip-v4** This is a trained model of a DQPN_freq agent playing PongNoFrameskip-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DDQPN_x5.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DDQPN_x5]" python -m cleanrl_utils.enjoy --exp-name DDQPN_x5 --env-id PongNoFrameskip-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQPN_x5-seed3/raw/main/dqpn_freq_atari.py curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQPN_x5-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQPN_x5-seed3/raw/main/poetry.lock poetry install --all-extras python dqpn_freq_atari.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DDQPN_x5 --target-network-frequency 1000 --policy-network-frequency 5000 --seed 3 --double-learning ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq_atari.py', 'batch_size': 32, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'double_learning': True, 'end_e': 0.05, 'env_id': 'PongNoFrameskip-v4', 'exp_name': 'DDQPN_x5', 'exploration_fraction': 0.2, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 10000, 'max_gradient_norm': inf, 'policy_network_frequency': 5000, 'policy_tau': 1.0, 'save_model': True, 'seed': 3, 'start_e': 1.0, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
pfunk/PongNoFrameskip-v4-DDQPN_x1-seed3
pfunk
2023-10-14T13:55:54Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "PongNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T13:55:48Z
--- tags: - PongNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PongNoFrameskip-v4 type: PongNoFrameskip-v4 metrics: - type: mean_reward value: 18.92 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **PongNoFrameskip-v4** This is a trained model of a DQPN_freq agent playing PongNoFrameskip-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DDQPN_x1.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DDQPN_x1]" python -m cleanrl_utils.enjoy --exp-name DDQPN_x1 --env-id PongNoFrameskip-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQPN_x1-seed3/raw/main/dqpn_freq_atari.py curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQPN_x1-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQPN_x1-seed3/raw/main/poetry.lock poetry install --all-extras python dqpn_freq_atari.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DDQPN_x1 --target-network-frequency 1000 --policy-network-frequency 1000 --seed 3 --double-learning ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq_atari.py', 'batch_size': 32, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'double_learning': True, 'end_e': 0.05, 'env_id': 'PongNoFrameskip-v4', 'exp_name': 'DDQPN_x1', 'exploration_fraction': 0.2, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 10000, 'max_gradient_norm': inf, 'policy_network_frequency': 1000, 'policy_tau': 1.0, 'save_model': True, 'seed': 3, 'start_e': 1.0, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
agoel3705/taxi-v3
agoel3705
2023-10-14T13:53:22Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T13:36:30Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="agoel3705/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
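The snippet above assumes a `load_from_hub` helper defined in the course notebook; a hedged sketch of an equivalent, assuming the Q-table was uploaded as a pickle:

```python
# Hypothetical stand-in for the course's helper: download and unpickle
# the model dict (which holds "env_id" and the Q-table, per the snippet above).
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```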
pfunk/PongNoFrameskip-v4-DDQN-seed3
pfunk
2023-10-14T13:51:39Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "PongNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T13:51:32Z
--- tags: - PongNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PongNoFrameskip-v4 type: PongNoFrameskip-v4 metrics: - type: mean_reward value: 18.53 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQN** Agent Playing **PongNoFrameskip-v4** This is a trained model of a DQN agent playing PongNoFrameskip-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DDQN.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DDQN]" python -m cleanrl_utils.enjoy --exp-name DDQN --env-id PongNoFrameskip-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQN-seed3/raw/main/dqn_atari.py curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQN-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DDQN-seed3/raw/main/poetry.lock poetry install --all-extras python dqn_atari.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DDQN --target-network-frequency 1000 --seed 3 --double-learning ``` # Hyperparameters ```python {'alg_type': 'dqn_atari.py', 'batch_size': 32, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'double_learning': True, 'end_e': 0.05, 'env_id': 'PongNoFrameskip-v4', 'exp_name': 'DDQN', 'exploration_fraction': 0.2, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 10000, 'max_gradient_norm': inf, 'save_model': True, 'seed': 3, 'start_e': 1.0, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
AndreasPiper/donut-base-sroie
AndreasPiper
2023-10-14T13:47:44Z
2
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-10-14T12:43:26Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-sroie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
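No usage section is provided, so a hedged Donut inference sketch follows. The `<s_sroie>` task prompt token and the image path are assumptions; check the repo's tokenizer for the actual special tokens.

```python
# Hedged sketch; "<s_sroie>" is an assumed task prompt token.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("AndreasPiper/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("AndreasPiper/donut-base-sroie")

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
prompt_ids = processor.tokenizer("<s_sroie>", add_special_tokens=False, return_tensors="pt").input_ids
outputs = model.generate(pixel_values, decoder_input_ids=prompt_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```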
Undi95/Mistral-11B-OmniMix-bf16-GGUF
Undi95
2023-10-14T13:44:23Z
28
2
null
[ "gguf", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2023-10-12T17:24:22Z
--- license: cc-by-nc-4.0 --- This model should be fixed; it was MEANT to be BF16. Don't mind this one at the moment, I need to finetune it for RP, it's just a test. ## Description This repo contains quantized files of Mistral-11B-OmniMix-bf16. My goal for this model was only to make it score the highest possible with merge and layer toying, proving that: - Benchmarks are objective - You should try a model yourself and don't go blindly to the highest rated one - Merge/layer toying CAN be used to make better models (maybe?) ## Model used - [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) - [Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus) - [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B) - [zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) ## Prompt template The best one after further testing is this one: ``` <|system|> Below is an instruction that describes a task. Write a response that appropriately completes the request. <|user|> {prompt} <|assistant|> ``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/tWIx8yeoallv94zrhN6L-.png) But these ones work too: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ``` USER: <prompt> ASSISTANT: ``` Or use any prompting system from one of the 4 source models; it should work. ## The secret sauce Mistral-11B-OpenOrcaPlatypus : ``` slices: - sources: - model: Open-Orca/Mistral-7B-OpenOrca layer_range: [0, 24] - sources: - model: akjindal53244/Mistral-7B-v0.1-Open-Platypus layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Mistral-11B-CC-Zephyr : ``` slices: - sources: - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16" layer_range: [0, 24] - sources: - model: "/content/drive/MyDrive/Zephyr-7B" layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Mistral-11B-OmniMix : ``` slices: - sources: - model: Mistral-11B-OpenOrcaPlatypus layer_range: [0, 48] - model: Mistral-11B-CC-Zephyr layer_range: [0, 48] merge_method: slerp base_model: Mistral-11B-OpenOrcaPlatypus parameters: t: - filter: lm_head value: [0.75] - filter: embed_tokens value: [0.75] - filter: self_attn value: [0.75, 0.25] - filter: mlp value: [0.25, 0.75] - filter: layernorm value: [0.5, 0.5] - filter: modelnorm value: [0.75] - value: 0.5 # fallback for rest of tensors dtype: bfloat16 ``` I use [mergekit](https://github.com/cg123/mergekit) for all the manipulations described here. 
## Some scoring I did myself ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/5aDYq-V0XWUsqbLH2ehPr.png) hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-OmniMix-bf16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4 | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5580|± |0.0145| | | |acc_norm|0.5819|± |0.0144| |arc_easy | 0|acc |0.8300|± |0.0077| | | |acc_norm|0.8211|± |0.0079| |hellaswag | 0|acc |0.6372|± |0.0048| | | |acc_norm|0.8209|± |0.0038| |piqa | 0|acc |0.8145|± |0.0091| | | |acc_norm|0.8286|± |0.0088| |truthfulqa_mc| 1|mc1 |0.3978|± |0.0171| | | |mc2 |0.5680|± |0.0155| |winogrande | 0|acc |0.7427|± |0.0123| ## Others Special thanks to Sushi, to [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and to [Charles Goddard](https://github.com/cg123) for his amazing tool. If you want to support me, you can do so [here](https://ko-fi.com/undiai).
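As a hedged sketch of running one of these quants with llama-cpp-python — the `.gguf` filename is an assumption, so check the repo's file list, and the prompt follows the template above:

```python
# Sketch only: the quant filename is an assumption; pick one from the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download("Undi95/Mistral-11B-OmniMix-bf16-GGUF", "Mistral-11B-OmniMix.q4_k_m.gguf")
llm = Llama(model_path=path, n_ctx=4096)
prompt = "<|system|>\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n<|user|>\nGive me three uses for a paperclip.\n<|assistant|>\n"
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```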
yesj1234/mbart-mmt_mid1_zh-ko
yesj1234
2023-10-14T13:33:38Z
6
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "generated_from_trainer", "zh", "ko", "base_model:facebook/mbart-large-cc25", "base_model:finetune:facebook/mbart-large-cc25", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-14T13:25:43Z
--- language: - zh - ko base_model: facebook/mbart-large-cc25 tags: - generated_from_trainer metrics: - bleu model-index: - name: zh-kr_mid results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zh-kr_mid This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5557 - Bleu: 16.6036 - Gen Len: 15.4901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 2.7248 | 0.75 | 1000 | 1.9410 | 3.2381 | 48.6095 | | 1.5683 | 1.5 | 2000 | 1.6889 | 10.2345 | 20.4433 | | 1.1916 | 2.25 | 3000 | 1.6843 | 13.4571 | 18.8854 | | 1.068 | 2.99 | 4000 | 1.6390 | 15.6862 | 15.5054 | | 0.7313 | 3.74 | 5000 | 1.7003 | 15.2014 | 16.5938 | | 0.4832 | 4.49 | 6000 | 1.8982 | 15.0381 | 16.9068 | | 0.3862 | 5.24 | 7000 | 2.1426 | 15.5397 | 15.6451 | | 0.3675 | 5.99 | 8000 | 2.1168 | 15.8847 | 15.6926 | | 0.2627 | 6.74 | 9000 | 2.2603 | 16.3603 | 15.9671 | | 0.1955 | 7.49 | 10000 | 2.4114 | 15.7447 | 15.979 | | 0.171 | 8.23 | 11000 | 2.5141 | 15.7852 | 15.9244 | | 0.1702 | 8.98 | 12000 | 2.5557 | 16.6036 | 15.4901 | | 0.1298 | 9.73 | 13000 | 2.6536 | 16.1319 | 15.5492 | | 0.1052 | 10.48 | 14000 | 2.7586 | 16.1807 | 15.8884 | | 0.2268 | 11.23 | 15000 | 2.7258 | 15.1752 | 15.5346 | | 0.1327 | 11.98 | 16000 | 2.7193 | 15.8563 | 15.7971 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.1.0+cu121 - Datasets 2.14.5 - Tokenizers 0.14.1
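A hedged usage sketch; the mBART language codes for Chinese and Korean ("zh_CN", "ko_KR") are assumptions based on mbart-large-cc25's code list, and the sample sentence is illustrative:

```python
# Hedged sketch; language codes assumed from mbart-large-cc25.
from transformers import MBartForConditionalGeneration, MBartTokenizer

repo = "yesj1234/mbart-mmt_mid1_zh-ko"
tokenizer = MBartTokenizer.from_pretrained(repo, src_lang="zh_CN", tgt_lang="ko_KR")
model = MBartForConditionalGeneration.from_pretrained(repo)

inputs = tokenizer("你好,世界", return_tensors="pt")
generated = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ko_KR"], max_new_tokens=32)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```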
BryanBradfo/dqn-SpaceInvadersNoFrameskip-v4
BryanBradfo
2023-10-14T13:28:46Z
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T13:28:15Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 257.00 +/- 38.81 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BryanBradfo -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BryanBradfo -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga BryanBradfo ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 10000), ('learning_starts', 100000), ('n_timesteps', 1000000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Jake0078/Mommy
Jake0078
2023-10-14T13:19:46Z
0
0
null
[ "license:unknown", "region:us" ]
null
2023-10-14T13:16:03Z
--- license: unknown metrics: - character ---
RazinAleks/mT5-fine-tune
RazinAleks
2023-10-14T13:09:22Z
4
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-13T11:15:36Z
--- license: apache-2.0 base_model: google/mt5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: mT5-fine-tune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mT5-fine-tune This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5256 - Rouge1: 0.0822 - Rouge2: 0.0244 - Rougel: 0.0813 - Rougelsum: 0.0814 - Gen Len: 18.9803 - Chrf Score: 20.301 - Chrf Char Order: 6 - Chrf Word Order: 0 - Chrf Beta: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Chrf Score | Chrf Char Order | Chrf Word Order | Chrf Beta | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:----------:|:---------------:|:---------------:|:---------:| | 3.5479 | 1.0 | 1951 | 2.7435 | 0.0672 | 0.021 | 0.0666 | 0.0667 | 18.9323 | 19.2495 | 6 | 0 | 2 | | 3.1717 | 2.0 | 3902 | 2.6452 | 0.0746 | 0.0207 | 0.0738 | 0.0737 | 18.9814 | 20.1079 | 6 | 0 | 2 | | 3.0151 | 3.0 | 5853 | 2.6014 | 0.0834 | 0.0243 | 0.0826 | 0.0823 | 18.9891 | 20.2875 | 6 | 0 | 2 | | 2.95 | 4.0 | 7804 | 2.5647 | 0.0765 | 0.0218 | 0.0757 | 0.0757 | 18.981 | 20.2327 | 6 | 0 | 2 | | 2.8592 | 5.0 | 9755 | 2.5480 | 0.0822 | 0.0242 | 0.0814 | 0.0813 | 18.9819 | 20.3982 | 6 | 0 | 2 | | 2.8214 | 6.0 | 11706 | 2.5317 | 0.0841 | 0.0255 | 0.0831 | 0.083 | 18.9764 | 20.3935 | 6 | 0 | 2 | | 2.789 | 7.0 | 13657 | 2.5256 | 0.0822 | 0.0244 | 0.0813 | 0.0814 | 18.9803 | 20.301 | 6 | 0 | 2 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
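Since the card does not state the task or input format, here is a generic, hedged seq2seq sketch; the input text is purely illustrative:

```python
# Generic text2text sketch; the input text is illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("RazinAleks/mT5-fine-tune")
model = AutoModelForSeq2SeqLM.from_pretrained("RazinAleks/mT5-fine-tune")

inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```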
lauraparra28/albert-large-v2-finetuned-squad
lauraparra28
2023-10-14T12:26:21Z
25
0
transformers
[ "transformers", "pytorch", "albert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "base_model:albert/albert-large-v2", "base_model:finetune:albert/albert-large-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-10-12T20:35:34Z
--- license: apache-2.0 base_model: albert-large-v2 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: albert-large-v2-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-large-v2-finetuned-squad This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.1819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.8412 | 1.0 | 8248 | 0.8427 | | 0.6729 | 2.0 | 16496 | 0.7978 | | 0.5069 | 3.0 | 24744 | 0.8760 | | 0.357 | 4.0 | 32992 | 1.0229 | | 0.2374 | 5.0 | 41240 | 1.1819 | ### Framework versions - Transformers 4.34.0 - Pytorch 1.12.1 - Datasets 2.14.5 - Tokenizers 0.14.1
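A minimal extractive-QA sketch (question and context are illustrative); because this is a squad_v2 fine-tune, `handle_impossible_answer=True` lets the pipeline return "no answer":

```python
# Minimal sketch; the question/context pair is illustrative.
from transformers import pipeline

qa = pipeline("question-answering", model="lauraparra28/albert-large-v2-finetuned-squad")
result = qa(question="What dataset was the model fine-tuned on?",
            context="This model is a fine-tuned version of albert-large-v2 on the squad_v2 dataset.",
            handle_impossible_answer=True)
print(result)
```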
jproman/dqn-SpaceInvadersNoFrameskip-v4
jproman
2023-10-14T12:25:02Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T12:24:32Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 514.00 +/- 202.26 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jproman -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jproman -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jproman ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
MattStammers/appo-atari_qbert-superhuman
MattStammers
2023-10-14T12:22:36Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-26T22:43:26Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_qbert type: atari_qbert metrics: - type: mean_reward value: 30000.00 +/- 2753.45 name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_qbert** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r MattStammers/APPO-atari_qbert ``` ## About the Model This model, like all the others in these benchmarks, was initially trained asynchronously and un-seeded to 10 million steps to set a Sample-Factory async baseline on this environment, but only 3/57 models made it. The aim is to reach state-of-the-art (SOTA) performance on each Atari environment. I will flag the models with SOTA when they reach at or near these levels. The hyperparameters used in the model are the ones I have pushed to my fork of sample-factory: https://github.com/MattStammers/sample-factory. Given that https://huggingface.co/edbeeching has kindly shared his, I saved time and energy by using many of his tuned hyperparameters to maximise performance. However, he used 2 billion training steps. I started, as explained above, at 10 million steps and then moved to 100M to see how performance goes: ``` hyperparameters = { "device": "gpu", "seed": 1234, "num_policies": 2, "async_rl": true, "serial_mode": false, "batched_sampling": true, "num_batches_to_accumulate": 2, "worker_num_splits": 1, "policy_workers_per_policy": 1, "max_policy_lag": 1000, "num_workers": 16, "num_envs_per_worker": 2, "batch_size": 1024, "num_batches_per_epoch": 8, "num_epochs": 4, "rollout": 128, "recurrence": 1, "shuffle_minibatches": false, "gamma": 0.99, "reward_scale": 1.0, "reward_clip": 1000.0, "value_bootstrap": false, "normalize_returns": true, "exploration_loss_coeff": 0.0004677351413, "value_loss_coeff": 0.5, "kl_loss_coeff": 0.0, "exploration_loss": "entropy", "gae_lambda": 0.95, "ppo_clip_ratio": 0.1, "ppo_clip_value": 1.0, "with_vtrace": false, "vtrace_rho": 1.0, "vtrace_c": 1.0, "optimizer": "adam", "adam_eps": 1e-05, "adam_beta1": 0.9, "adam_beta2": 0.999, "max_grad_norm": 0.0, "learning_rate": 0.0003033891184, "lr_schedule": "linear_decay", "lr_schedule_kl_threshold": 0.008, "lr_adaptive_min": 1e-06, "lr_adaptive_max": 0.01, "obs_subtract_mean": 0.0, "obs_scale": 255.0, "normalize_input": true, "normalize_input_keys": [ "obs" ], "decorrelate_experience_max_seconds": 0, "decorrelate_envs_on_one_worker": true, "actor_worker_gpus": [], "set_workers_cpu_affinity": true, "force_envs_single_thread": false, "default_niceness": 0, "log_to_file": true, "experiment_summaries_interval": 3, "flush_summaries_interval": 30, "stats_avg": 100, "summaries_use_frameskip": true, "heartbeat_interval": 10, "heartbeat_reporting_interval": 60, "train_for_env_steps": 100000000, "train_for_seconds": 10000000000, "save_every_sec": 120, "keep_checkpoints": 2, "load_checkpoint_kind": "latest", "save_milestones_sec": 1200, "save_best_every_sec": 5, "save_best_metric": "reward", "save_best_after": 100000, "benchmark": false, "encoder_mlp_layers": [ 512, 512 ], "encoder_conv_architecture": "convnet_atari", 
"encoder_conv_mlp_layers": [ 512 ], "use_rnn": false, "rnn_size": 512, "rnn_type": "gru", "rnn_num_layers": 1, "decoder_mlp_layers": [], "nonlinearity": "relu", "policy_initialization": "orthogonal", "policy_init_gain": 1.0, "actor_critic_share_weights": true, "adaptive_stddev": false, "continuous_tanh_scale": 0.0, "initial_stddev": 1.0, "use_env_info_cache": false, "env_gpu_actions": false, "env_gpu_observations": true, "env_frameskip": 4, "env_framestack": 4, } ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.atari.enjoy_atari --algo=APPO --env=atari_qbert --train_dir=./train_dir --experiment=APPO-atari_qbert ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.atari.train_atari --algo=APPO --env=atari_qbert --train_dir=./train_dir --experiment=APPO-atari_qbert --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
salohiddin94/cartpole-v1
salohiddin94
2023-10-14T12:19:03Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T12:18:55Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
cartesinus/iva_mt_wslot-m2m100_418M-en-pl-lora_adapter
cartesinus
2023-10-14T12:15:35Z
6
0
peft
[ "peft", "machine translation", "iva", "virtual assistants", "natural language understanding", "nlu", "pl", "dataset:iva_mt_wslot", "license:mit", "model-index", "region:us" ]
null
2023-10-14T05:07:09Z
--- library_name: peft license: mit datasets: - iva_mt_wslot metrics: - bleu model-index: - name: iva_mt_wslot-m2m100_418M-en-pl-lora_adapter results: - task: name: Machine Translation type: text2text-generation dataset: name: iva_mt_wslot type: iva_mt_wslot config: en-pl split: validation args: en-pl metrics: - name: Bleu type: bleu value: 38.2365 language: - pl tags: - machine translation - iva - virtual assistants - natural language understanding - nlu --- # (WIP!) iva_mt_wslot-m2m100_418M-en-pl-lora_adapter Notice: **Although training results are good, for some reason inference results are rather poor. I'm leaving this model here as a PoC that PEFT LoRA adaptation of M2M100 is possible.** This model is a LoRA-adapted version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M), trained on the iva_mt_wslot dataset. It achieves the following results on the test set (measured with sacrebleu): - Bleu: 9.33 ## Using The model can be used as follows: First, clone the repository and navigate to the project directory: ```bash git clone https://github.com/cartesinus/multiverb_iva_mt cd multiverb_iva_mt ``` Then: ```python from iva_mt.iva_mt import IVAMT lang = "pl" translator = IVAMT(lang, peft_model_id="cartesinus/iva_mt_wslot-m2m100_418M-en-pl-lora_adapter", device="cuda:0", batch_size=128) trans = translator.translate("your example here")[0] ``` ## Training results | Epoch | Training Loss | Validation Loss | Bleu | Gen Len | |:-----:|:-------------:|:---------------:|:-------:|:-------:| | 1 | 7.8621 | 7.6870 | 24.9063 | 19.3322 | | 2 | 7.6340 | 7.5312 | 29.7956 | 19.7533 | | 3 | 7.5582 | 7.4595 | 34.8184 | 20.1269 | | 4 | 7.5047 | 7.4264 | 36.1874 | 20.5621 | | 5 | 7.4888 | 7.4167 | 36.2287 | 20.4417 | | 6 | 7.4560 | 7.4013 | 36.6355 | 20.2241 | | 7 | 7.4477 | 7.3907 | 37.0554 | 20.0945 | | 8 | 7.4422 | 7.3743 | 37.7549 | 20.1589 | | 9 | 7.4311 | 7.3748 | 37.5705 | 19.9370 | | 10 | 7.4294 | 7.3679 | 37.5343 | 20.2241 | | 11 | 7.4114 | 7.3697 | 38.1872 | 20.3836 | | 12 | 7.4224 | 7.3620 | 38.1759 | 20.1785 | | 13 | 7.4334 | 7.3608 | 38.0895 | 20.2996 | | 14 | 7.4133 | 7.3621 | 38.2365 | 20.2948 | | 15 | 7.4158 | 7.3599 | 38.1056 | 20.2010 | ## Framework versions - PEFT 0.5.0
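Alternatively, the adapter can be loaded directly with `peft` on top of the base model; a sketch assuming standard PEFT conventions, with an illustrative input sentence:

```python
# Hedged sketch: load the LoRA adapter directly with peft.
from peft import PeftModel
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

base = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
model = PeftModel.from_pretrained(base, "cartesinus/iva_mt_wslot-m2m100_418M-en-pl-lora_adapter")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="pl")

inputs = tokenizer("Set an alarm for seven", return_tensors="pt")
out = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("pl"), max_new_tokens=48)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```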
sagarsdesai/rl_course_vizdoom_health_gathering_supreme
sagarsdesai
2023-10-14T12:08:56Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T12:08:48Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.85 +/- 5.13 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r sagarsdesai/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
LarryAIDraw/frieren-lora-nochekaiser
LarryAIDraw
2023-10-14T11:58:26Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-10-14T11:56:14Z
--- license: creativeml-openrail-m --- https://civitai.com/models/155788/frieren-frieren-beyond-journeys-end
fffffffidel/ppo-LunarLander-v2
fffffffidel
2023-10-14T11:52:53Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T11:50:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.50 +/- 36.61 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
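A hedged completion of the TODO above, with the checkpoint filename assumed from the course convention:

```python
# Assumed filename; confirm against the repo's file list.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="fffffffidel/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```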
nyxophyl/q-FrozenLake-v1-4x4-noSlippery
nyxophyl
2023-10-14T11:51:11Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T11:51:08Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="nyxophyl/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
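The snippet above assumes a `load_from_hub` helper from the course notebook; one hedged, pickle-based equivalent:

```python
# Hypothetical helper; assumes the Q-table dict was uploaded as a pickle.
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```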
PankajShukla/llamma-2-db-ai
PankajShukla
2023-10-14T11:44:27Z
0
0
null
[ "generated_from_trainer", "base_model:TheBloke/sheep-duck-llama-2-13B-GPTQ", "base_model:finetune:TheBloke/sheep-duck-llama-2-13B-GPTQ", "license:llama2", "region:us" ]
null
2023-10-14T11:44:22Z
--- license: llama2 base_model: TheBloke/sheep-duck-llama-2-13B-GPTQ tags: - generated_from_trainer model-index: - name: llamma-2-db-ai results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llamma-2-db-ai This model is a fine-tuned version of [TheBloke/sheep-duck-llama-2-13B-GPTQ](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GPTQ) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - training_steps: 400 ### Training results ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
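### Reproducing the hyperparameters

For reference, the hyperparameters above map onto a `TrainingArguments` configuration along these lines (a sketch; `output_dir` is illustrative, and the Adam betas/epsilon shown in the list are the Trainer defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llamma-2-db-ai",     # illustrative
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # total train batch size: 1 x 4 = 4
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=400,
)
```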
mokshu3242/my-pet-mouse
mokshu3242
2023-10-14T11:23:52Z
1
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-14T11:00:47Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### mok: My-Pet-Mouse Dreambooth model trained by mokshu3242 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: VCETV127 Sample pictures of this concept: ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/652a745660f06c6e52fb61ff/F4EAXbmeQLTCsh8o4EiI_.jpeg) ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/652a745660f06c6e52fb61ff/SUvxCSxkkONosLkBTiQ1y.jpeg)
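A minimal inference sketch with Diffusers; the instance token in the prompt is an assumption based on the concept name (`mok`) above:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mokshu3242/my-pet-mouse", torch_dtype=torch.float16
).to("cuda")

# "mok" as the instance token is an assumption from the concept heading.
image = pipe("a photo of mok mouse sitting on a leaf").images[0]
image.save("my-pet-mouse.png")
```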
sreejith8100/layoutlm-funsd
sreejith8100
2023-10-14T11:12:12Z
5
0
transformers
[ "transformers", "pytorch", "layoutlm", "token-classification", "generated_from_trainer", "dataset:funsd", "base_model:microsoft/layoutlm-base-uncased", "base_model:finetune:microsoft/layoutlm-base-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-10T06:27:06Z
--- base_model: microsoft/layoutlm-base-uncased tags: - generated_from_trainer datasets: - funsd model-index: - name: layoutlm-funsd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset. It achieves the following results on the evaluation set: - Loss: 0.7204 - Answer: {'precision': 0.7103218645948945, 'recall': 0.7911001236093943, 'f1': 0.7485380116959064, 'number': 809} - Header: {'precision': 0.3697478991596639, 'recall': 0.3697478991596639, 'f1': 0.3697478991596639, 'number': 119} - Question: {'precision': 0.7799126637554585, 'recall': 0.8384976525821596, 'f1': 0.8081447963800905, 'number': 1065} - Overall Precision: 0.7284 - Overall Recall: 0.7913 - Overall F1: 0.7585 - Overall Accuracy: 0.7930 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 1.8258 | 1.0 | 10 | 1.6104 | {'precision': 0.03933136676499508, 'recall': 0.049443757725587144, 'f1': 0.04381161007667032, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.2102076124567474, 'recall': 0.22816901408450704, 'f1': 0.21882035119315627, 'number': 1065} | 0.1302 | 0.1420 | 0.1359 | 0.3742 | | 1.4584 | 2.0 | 20 | 1.2730 | {'precision': 0.1923509561304837, 'recall': 0.21137206427688504, 'f1': 0.20141342756183747, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4099290780141844, 'recall': 0.5427230046948357, 'f1': 0.467070707070707, 'number': 1065} | 0.3258 | 0.3758 | 0.3490 | 0.5734 | | 1.1076 | 3.0 | 30 | 0.9718 | {'precision': 0.501532175689479, 'recall': 0.6069221260815822, 'f1': 0.5492170022371365, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.5749211356466877, 'recall': 0.6845070422535211, 'f1': 0.6249464209172738, 'number': 1065} | 0.5358 | 0.6121 | 0.5714 | 0.6890 | | 0.8541 | 4.0 | 40 | 0.8287 | {'precision': 0.5573921028466483, 'recall': 0.7503090234857849, 'f1': 0.6396206533192834, 'number': 809} | {'precision': 0.14893617021276595, 'recall': 0.058823529411764705, 'f1': 0.08433734939759036, 'number': 119} | {'precision': 0.6451342281879194, 'recall': 0.7220657276995305, 'f1': 0.6814355338945502, 'number': 1065} | 0.5941 | 0.6939 | 0.6401 | 0.7420 | | 0.7105 | 5.0 | 50 | 0.7483 | 
{'precision': 0.6358695652173914, 'recall': 0.723114956736712, 'f1': 0.6766917293233082, 'number': 809} | {'precision': 0.2465753424657534, 'recall': 0.15126050420168066, 'f1': 0.18749999999999997, 'number': 119} | {'precision': 0.6898305084745763, 'recall': 0.7643192488262911, 'f1': 0.7251670378619154, 'number': 1065} | 0.6521 | 0.7110 | 0.6803 | 0.7618 | | 0.6079 | 6.0 | 60 | 0.7023 | {'precision': 0.6306209850107066, 'recall': 0.7280593325092707, 'f1': 0.6758462421113023, 'number': 809} | {'precision': 0.2875, 'recall': 0.19327731092436976, 'f1': 0.23115577889447236, 'number': 119} | {'precision': 0.6796267496111975, 'recall': 0.8206572769953052, 'f1': 0.7435133985538068, 'number': 1065} | 0.6461 | 0.7456 | 0.6923 | 0.7776 | | 0.5267 | 7.0 | 70 | 0.6779 | {'precision': 0.674892703862661, 'recall': 0.7775030902348579, 'f1': 0.7225732337736933, 'number': 809} | {'precision': 0.3, 'recall': 0.226890756302521, 'f1': 0.25837320574162675, 'number': 119} | {'precision': 0.717391304347826, 'recall': 0.8056338028169014, 'f1': 0.7589562140645733, 'number': 1065} | 0.6826 | 0.7597 | 0.7191 | 0.7853 | | 0.4735 | 8.0 | 80 | 0.6688 | {'precision': 0.6955093099671413, 'recall': 0.7849196538936959, 'f1': 0.7375145180023228, 'number': 809} | {'precision': 0.32608695652173914, 'recall': 0.25210084033613445, 'f1': 0.2843601895734597, 'number': 119} | {'precision': 0.7424892703862661, 'recall': 0.812206572769953, 'f1': 0.7757847533632287, 'number': 1065} | 0.7051 | 0.7677 | 0.7350 | 0.7950 | | 0.4196 | 9.0 | 90 | 0.6791 | {'precision': 0.6843243243243243, 'recall': 0.7824474660074165, 'f1': 0.7301038062283737, 'number': 809} | {'precision': 0.3181818181818182, 'recall': 0.29411764705882354, 'f1': 0.3056768558951965, 'number': 119} | {'precision': 0.7561807331628303, 'recall': 0.8328638497652582, 'f1': 0.7926720285969615, 'number': 1065} | 0.7043 | 0.7802 | 0.7403 | 0.7937 | | 0.3756 | 10.0 | 100 | 0.6968 | {'precision': 0.7089887640449438, 'recall': 0.7799752781211372, 'f1': 0.7427898763978811, 'number': 809} | {'precision': 0.328, 'recall': 0.3445378151260504, 'f1': 0.33606557377049184, 'number': 119} | {'precision': 0.7790393013100436, 'recall': 0.8375586854460094, 'f1': 0.807239819004525, 'number': 1065} | 0.7241 | 0.7847 | 0.7532 | 0.7947 | | 0.3402 | 11.0 | 110 | 0.6959 | {'precision': 0.7024070021881839, 'recall': 0.7935723114956736, 'f1': 0.7452118398142775, 'number': 809} | {'precision': 0.3416666666666667, 'recall': 0.3445378151260504, 'f1': 0.34309623430962344, 'number': 119} | {'precision': 0.7791411042944786, 'recall': 0.8347417840375587, 'f1': 0.8059836808703537, 'number': 1065} | 0.7228 | 0.7888 | 0.7543 | 0.7958 | | 0.3225 | 12.0 | 120 | 0.6945 | {'precision': 0.7106430155210643, 'recall': 0.792336217552534, 'f1': 0.7492694330800702, 'number': 809} | {'precision': 0.3644067796610169, 'recall': 0.36134453781512604, 'f1': 0.3628691983122363, 'number': 119} | {'precision': 0.7667238421955404, 'recall': 0.8394366197183099, 'f1': 0.8014343343792021, 'number': 1065} | 0.7219 | 0.7918 | 0.7552 | 0.7961 | | 0.3031 | 13.0 | 130 | 0.7204 | {'precision': 0.71, 'recall': 0.7898640296662547, 'f1': 0.7478057343475717, 'number': 809} | {'precision': 0.35, 'recall': 0.35294117647058826, 'f1': 0.35146443514644354, 'number': 119} | {'precision': 0.7895204262877442, 'recall': 0.8347417840375587, 'f1': 0.8115015974440895, 'number': 1065} | 0.7316 | 0.7878 | 0.7586 | 0.7916 | | 0.289 | 14.0 | 140 | 0.7196 | {'precision': 0.7095709570957096, 'recall': 0.7972805933250927, 'f1': 0.750873108265425, 'number': 809} | 
{'precision': 0.3826086956521739, 'recall': 0.3697478991596639, 'f1': 0.37606837606837606, 'number': 119} | {'precision': 0.7816593886462883, 'recall': 0.8403755868544601, 'f1': 0.8099547511312217, 'number': 1065} | 0.7303 | 0.7948 | 0.7612 | 0.7949 | | 0.2801 | 15.0 | 150 | 0.7204 | {'precision': 0.7103218645948945, 'recall': 0.7911001236093943, 'f1': 0.7485380116959064, 'number': 809} | {'precision': 0.3697478991596639, 'recall': 0.3697478991596639, 'f1': 0.3697478991596639, 'number': 119} | {'precision': 0.7799126637554585, 'recall': 0.8384976525821596, 'f1': 0.8081447963800905, 'number': 1065} | 0.7284 | 0.7913 | 0.7585 | 0.7930 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
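## Inference example

A minimal inference sketch for the fine-tuned checkpoint. LayoutLM expects word-level bounding boxes normalized to a 0-1000 grid; the words and boxes below are hypothetical OCR output:

```python
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizerFast

tokenizer = LayoutLMTokenizerFast.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("sreejith8100/layoutlm-funsd")

# Hypothetical OCR output: words plus boxes normalized to a 0-1000 grid.
words = ["Date:", "October", "14,", "2023"]
boxes = [[57, 49, 111, 62], [120, 49, 185, 62], [190, 49, 222, 62], [227, 49, 275, 62]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
word_ids = encoding.word_ids(0)
# Repeat each word's box for its subword tokens; special tokens get a zero box.
token_boxes = [boxes[i] if i is not None else [0, 0, 0, 0] for i in word_ids]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits

predictions = logits.argmax(-1).squeeze(0).tolist()
for word_id, pred in zip(word_ids, predictions):
    if word_id is not None:
        print(words[word_id], model.config.id2label[pred])
```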
airenGpt/codeAlpacaFalcon7B
airenGpt
2023-10-14T11:01:53Z
0
0
peft
[ "peft", "region:us" ]
null
2023-10-14T09:44:58Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
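### Loading the adapter

The quantization config above corresponds to a `BitsAndBytesConfig` like the one below. A loading sketch for the adapter, reading the base model id from the adapter config rather than hard-coding it:

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

adapter_id = "airenGpt/codeAlpacaFalcon7B"
peft_config = PeftConfig.from_pretrained(adapter_id)  # exposes base_model_name_or_path

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)
```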
espnet/akreal_libritts_asr_phn
espnet
2023-10-14T11:00:42Z
1
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:libritts", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2023-10-14T10:56:54Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - libritts license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/akreal_libritts_asr_phn` This model was trained by Pavel Denisov using libritts recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 3800a13ae8972839d506b85585c41e6b24daf812 pip install -e . cd egs2/libritts/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/akreal_libritts_asr_phn ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sat Oct 14 03:02:09 CEST 2023` - python version: `3.10.8 (main, Nov 14 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-3)]` - espnet version: `espnet 202308` - pytorch version: `pytorch 2.0.1+cu118` - Git hash: `3800a13ae8972839d506b85585c41e6b24daf812` - Commit date: `Sun Oct 8 17:51:17 2023 +0200` ## exp/asr_train_asr_raw_en_bpe100_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/dev-clean|5736|95872|91.7|8.0|0.4|0.8|9.1|67.0| |decode_asr_asr_model_valid.acc.ave/dev-other|4613|69577|88.5|10.9|0.6|1.2|12.7|74.2| |decode_asr_asr_model_valid.acc.ave/test-clean|4837|87078|91.4|8.2|0.4|0.8|9.4|70.4| |decode_asr_asr_model_valid.acc.ave/test-other|5120|72541|87.0|12.2|0.8|1.1|14.1|77.1| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/dev-clean|5736|570710|98.4|0.8|0.9|0.6|2.2|67.1| |decode_asr_asr_model_valid.acc.ave/dev-other|4613|414781|97.2|1.6|1.2|1.0|3.8|74.2| |decode_asr_asr_model_valid.acc.ave/test-clean|4837|530647|98.5|0.7|0.8|0.6|2.2|70.5| |decode_asr_asr_model_valid.acc.ave/test-other|5120|429463|96.7|1.7|1.6|1.0|4.3|77.1| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/dev-clean|5736|433548|97.6|1.4|1.0|0.6|3.0|67.1| |decode_asr_asr_model_valid.acc.ave/dev-other|4613|316550|96.1|2.5|1.4|1.0|5.0|74.2| |decode_asr_asr_model_valid.acc.ave/test-clean|4837|404031|97.7|1.4|0.9|0.7|2.9|70.5| |decode_asr_asr_model_valid.acc.ave/test-other|5120|327248|95.4|2.8|1.8|1.1|5.7|77.1| ## ASR config <details><summary>expand</summary> ``` config: conf/train_asr.yaml print_config: false log_level: INFO drop_last_iter: false dry_run: false iterator_type: sequence valid_iterator_type: null output_dir: exp/asr_train_asr_raw_en_bpe100_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 58465 dist_launcher: null multiprocessing_distributed: true unused_parameters: true sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 70 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 8 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null 
wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 10000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe100_sp/train/speech_shape - exp/asr_stats_raw_en_bpe100_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe100_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe100_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending shuffle_within_batch: false sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 chunk_excluded_key_prefixes: [] train_data_path_and_name_and_type: - - dump/raw/train-960_sp/wav.scp - speech - kaldi_ark - - dump/raw/train-960_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adam optim_conf: lr: 0.005 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 40000 token_list: - <blank> - <unk> - ˈ - ː - ▁ - ɹ - t - d - ɪ - i - ˌ - ɛ - ',' - s - l - n - k - z - m - ▁s - eɪ - ʌ - aɪ - . - ɔ - æ - ɚ - oʊ - ɑ - ▁w - ▁h - v - ▁b - ▁m - p - u - ə - ▁f - ▁k - ▁ɐ - ▁ðə - ən - ▁ð - ▁ˈ - ɪŋ - ▁ænd - ɜ - f - ʊ - ▁p - ɾ - ▁ʌv - ▁d - st - ▁tə - ɛn - ▁l - aʊ - əl - b - ▁n - ʃ - ▁t - tʃ - ▁ɪn - ▁ɡ - ðə - ɪn - ▁ɹ - θ - w - '"' - ▁j - dʒ - æn - ▁" - ɡ - ð - o - ɐ - j - ŋ - ; - '?' - '!' - h - ':' - ʒ - ʔ - r - — - ɬ - x - ç - ̩ - ᵻ - e - a - ̃ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram100/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 aux_ctc_tasks: [] frontend: default frontend_conf: n_fft: 512 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 10 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe100_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.6 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: e_branchformer encoder_conf: output_size: 512 attention_heads: 8 attention_layer_type: rel_selfattn pos_enc_layer_type: rel_pos rel_pos_type: latest cgmlp_linear_units: 3072 cgmlp_conv_kernel: 31 use_linear_after_conv: false gate_activation: identity num_blocks: 17 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d layer_drop_rate: 0.1 linear_units: 1024 positionwise_layer_type: linear macaron_ffn: true use_ffn: true merge_conv_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 layer_drop_rate: 0.2 
preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202308' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
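### Python inference

Inference is also possible directly from Python via `espnet_model_zoo`; a minimal sketch (the WAV path is a placeholder; note that this model emits phoneme-like tokens, per the token list above):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("espnet/akreal_libritts_asr_phn")

speech, rate = soundfile.read("sample.wav")  # placeholder path; 16 kHz mono audio
text, tokens, *_ = speech2text(speech)[0]
print(text)
```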
Falah/sdworldlandmarks
Falah
2023-10-14T10:40:09Z
1
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-14T07:31:38Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### SDWorldLandmarks Dreambooth model trained by Falah with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
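A minimal inference sketch with Diffusers; the prompt's instance token is an assumption based on the concept name:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Falah/sdworldlandmarks", torch_dtype=torch.float16
).to("cuda")

# "sdworldlandmarks" as the instance token is an assumption from the concept name.
image = pipe("a photo of sdworldlandmarks, a famous landmark at sunset").images[0]
image.save("landmark.png")
```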
Sarthak7777/finetuned-qna
Sarthak7777
2023-10-14T10:34:59Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-10-14T10:24:46Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - squad model-index: - name: finetuned-qna results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-qna This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 2.6777 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 125 | 3.2458 | | No log | 2.0 | 250 | 2.6777 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
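## Usage

A minimal usage sketch with the `question-answering` pipeline (the question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sarthak7777/finetuned-qna")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```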
pabloyesteb/rl_course_vizdoom_health_gathering_supreme
pabloyesteb
2023-10-14T10:34:37Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-14T09:17:04Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 12.04 +/- 5.13
      name: mean_reward
      verified: false
---

An **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r pabloyesteb/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note: you may need to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously stopped.
imdatta0/qwen-oasst-books
imdatta0
2023-10-14T10:02:32Z
0
0
peft
[ "peft", "region:us" ]
null
2023-10-14T10:02:29Z
---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.5.0
hilmansw/resnet18-catdog-classifier
hilmansw
2023-10-14T09:52:51Z
121
0
transformers
[ "transformers", "pytorch", "resnet", "image-classification", "generated_from_trainer", "en", "base_model:microsoft/resnet-18", "base_model:finetune:microsoft/resnet-18", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-22T15:09:05Z
---
license: apache-2.0
base_model: microsoft/resnet-18
tags:
- generated_from_trainer
model-index:
- name: resnet18-catdog-classifier
  results: []
pipeline_tag: image-classification
language:
- en
metrics:
- accuracy
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Model description

This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on a [custom](https://www.kaggle.com/datasets/samuelcortinhas/cats-and-dogs-image-classification) dataset. It was built with the "Cats & Dogs Classification" dataset from Kaggle, using the PyTorch framework to fine-tune the pre-trained ResNet-18 checkpoint on that data.

## Training results

| Epoch | Accuracy |
|:-----:|:--------:|
| 1.0   | 0.9357   |
| 2.0   | 0.9786   |
| 3.0   | 0.9000   |
| 4.0   | 0.9214   |
| 5.0   | 0.9143   |
| 6.0   | 0.9429   |
| 7.0   | 0.9714   |
| 8.0   | 0.9929   |
| 9.0   | 0.9714   |
| 10.0  | 0.9714   |

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- loss_function: CrossEntropyLoss
- optimizer: AdamW
- learning_rate: 0.0001
- batch_size: 16
- num_epochs: 10

### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
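## Usage

A minimal inference sketch with the `image-classification` pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="hilmansw/resnet18-catdog-classifier")
print(classifier("pet.jpg"))  # placeholder path to a cat or dog photo
```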
openbmb/UltraCM-13b
openbmb
2023-10-14T09:50:25Z
14
18
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:openbmb/UltraFeedback", "arxiv:2310.01377", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-22T09:35:17Z
---
license: mit
datasets:
- openbmb/UltraFeedback
---

A critic model trained on [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback). Please refer to the [paper](https://arxiv.org/abs/2310.01377) and [GitHub](https://github.com/thunlp/UltraFeedback) for more details.

# Use Case

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, pipeline

ultracm_instruction_template = """Given my answer to an instruction, your role is to provide specific and constructive feedback for me. You should find the best way for me to learn from your feedback and improve my performance. You should consider multiple aspects of my answer, including helpfulness, truthfulness, honesty, and to what extent the answer follows instructions.
---
### Instruction
{instruction}

### Answer
{completion}
---
Please act as a teacher and provide specific and constructive feedback. Besides describing the weaknesses of the answer, you should also provide specific suggestions to guide me toward understanding how to improve. Please note, however, that your suggestions should help me better complete the instructions, but you should not introduce new requirements that are not mentioned in the instructions. Your feedback should focus on enhancing my ability to think critically and respond accurately. However, never explicitly provide the reference answer, nor do polite phrases be required. Only respond with concise feedback in chat style. Finally, score the overall quality of the answer from 1 to 10, where 1 is the worst and 10 is the best.

*Format*
### Feedback
Overall Score: [1-10]
[Your feedback]

---

### Feedback
Overall Score: """

dataset = [
    {
        "instruction": "What if the Beatles had never formed as a band?",
        "completion": "The Beatles are widely credited with starting the British Invasion, a wave of rock and roll bands from the United Kingdom that became popular in America in the 1960s. If the Beatles had never formed, this musical movement may have never happened, and the world may have been exposed to a different kind of music. So, without the Beatles\u2019 fame and success, other bands wouldn\u2019t have been able to break into the American music scene and create a wider range of sounds. We could have ended up with a less interesting mix of songs playing on the radio.",
    },
    {
        "source": "fair_eval",
        "instruction": "Given that f(x) = 5x^3 - 2x + 3, find the value of f(2).",
        "completion": "The problem is asking us to find the value of the function f(x) = 5x^3 - 2x + 3 at the point x = 2. \n\nStep 1: Substitute x with 2 in the function\nWe replace every x in the function with 2. This gives us:\nf(2) = 5(2)^3 - 2(2) + 3\n\nStep 2: Simplify the expression\nNext, we simplify the expression by performing the operations in order from left to right.\n\nFirst, calculate the cube of 2, which is 8. Substitute this back into the expression:\nf(2) = 5(8) - 4 + 3\n\nThen, multiply 5 by 8 which gives us 40:\nf(2) = 40 - 4 + 3\n\nFinally, subtract 4 from 40 which gives us 36, and then add 3 to that:\nf(2) = 36 + 3\n\nStep 3: Final calculation\nNow, add 36 and 3 together:\nf(2) = 39\n\nSo, the value of the function f(x) = 5x^3 - 2x + 3 at the point x = 2 is 39.",
    },
]

tokenizer = LlamaTokenizer.from_pretrained("openbmb/UltraCM-13b")
model = LlamaForCausalLM.from_pretrained("openbmb/UltraCM-13b", device_map="auto")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)


def generate_feedback(example):
    system_prompt = (
        "User: A one-turn chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, very detailed, and polite answers to the user's questions.</s>"
    )
    conv = [system_prompt]
    conv.append(
        "User: "
        + ultracm_instruction_template.format(
            instruction=example["instruction"],
            completion=example["completion"],
        )
        + "</s>"
    )
    conv.append("Assistant: ")
    prompt = "\n".join(conv)
    with torch.no_grad():
        response = generator(
            prompt,
            num_return_sequences=1,
            return_full_text=False,
            handle_long_generation="hole",
            temperature=1.0,
            top_p=1.0,
            max_new_tokens=1024,
            repetition_penalty=1.2,
            do_sample=True,
        )
    response = response[0]["generated_text"].strip("\n").strip()
    print(response)


for example in dataset:
    generate_feedback(example)
```

# Citation

```
@misc{cui2023ultrafeedback,
      title={UltraFeedback: Boosting Language Models with High-quality Feedback},
      author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
      year={2023},
      eprint={2310.01377},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
openbmb/UltraRM-13b
openbmb
2023-10-14T09:49:47Z
709
56
transformers
[ "transformers", "pytorch", "llama", "dataset:openbmb/UltraFeedback", "arxiv:2310.01377", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2023-09-22T09:34:58Z
---
license: mit
datasets:
- openbmb/UltraFeedback
---

# News

- [2023/09/26]: UltraRM unleashes the power of [UltraLM-13B-v2.0](https://huggingface.co/openbmb/UltraLM-13b-v2.0) and [UltraLM-13B](https://huggingface.co/openbmb/UltraLM-13b)! A simple best-of-16 sampling achieves **92.30%** (UltraLM2, 🥇 in 13B results) and **91.54%** (UltraLM, 🥇 in LLaMA-1 results) win rates against text-davinci-003 on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark!
- [2023/09/26]: We release the [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, along with the UltraFeedback-powered reward model [UltraRM](https://huggingface.co/openbmb/UltraRM-13b) and critique model [UltraCM](https://huggingface.co/openbmb/UltraCM-13b)! Both set **new SOTAs** over open-source models!

# Links

- 🤗 [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback)
- 🤗 [UltraRM](https://huggingface.co/openbmb/UltraRM-13b)
- 🤗 [UltraCM](https://huggingface.co/openbmb/UltraCM-13b)

# UltraRM

We train and release a reward model UltraRM based on UltraFeedback to further facilitate alignment research. UltraRM is initialized from LLaMA2-13B. Specifically, we train two versions of reward models: UltraRM-UF is fine-tuned only on UltraFeedback, while UltraRM is fine-tuned on a mixture of UltraFeedback and an equal-size sample from three open-source datasets, namely [Anthropic HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), [Stanford SHP](https://huggingface.co/datasets/stanfordnlp/SHP), and [Summarization](https://huggingface.co/datasets/openai/summarize_from_feedback).

## Reward Modeling

On four public preference test sets, our UltraRM achieves SOTA over other open-source reward models.

## Usage

```python
from typing import List, Optional

import torch
import torch.nn as nn
from transformers import LlamaConfig, LlamaModel, LlamaTokenizer, PreTrainedModel


class LlamaRewardModel(PreTrainedModel):
    config_class = LlamaConfig

    def __init__(self, config):
        super().__init__(config)
        self.model = LlamaModel(config)
        self.regression_head = nn.Linear(self.config.hidden_size, 1, bias=False)

    def forward(  # args are the same as LlamaForCausalLM
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ):
        transformer_outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
        )
        hidden_states = transformer_outputs[0]
        rewards = self.regression_head(hidden_states).squeeze(-1)
        # Read out the reward at the last non-padding token of each sequence.
        ends = attention_mask.cumsum(dim=1).argmax(dim=1).view(-1, 1)
        rewards = torch.gather(rewards, 1, ends)
        return rewards


ultrarm_template = """Human: {instruction}

Assistant: {completion}"""

dataset = [  # cases in webgpt
    {
        "chosen": "Human: \"Who orders martinis \"\"shaken, not stirred\"\"?\"\nAssistant: Sean Connery's character, fictional British Secret Service agent James Bond, in the movie Goldfinger, stated that he preferred his martini to be \"shaken, not stirred\". [1] Some believe that Bond ordered his martini shaken because of the vodka it contained, as vodka was, for the most part, refined from potatoes (cheaper brands) which made the vodka oily. To disperse the oil, Bond ordered his martinis shaken. [2]",
        "rejected": "Human: \"Who orders martinis \"\"shaken, not stirred\"\"?\"\nAssistant: Fleming's fictional British Secret Service agent James Bond orders his martini cocktail shaken, not stirred [1]. Bond's preferences for his martini are carried over to the films, where his orders are seen in both the 1961 film Dr. No and the 2006 film Casino Royale [1, 2]. In both films, Bond's subordinates copy his order, telling the bartender to keep the fruit with their drinks [2]. However, in the 2006 film, Bond appears irritated when the bartender asks if he would like his drink shaken or stirred [2].",
    },
    {
        "chosen": "Human: Sural relates to which part of the body?\nAssistant: The sural region is the muscular swelling of the back of the leg below the knee, formed chiefly by the bellies of the gastrocnemius and soleus muscles [1,2].",
        "rejected": "Human: Sural relates to which part of the body?\nAssistant: The Sural nerve runs down the side of the leg near the small saphenous vein, then passes forward below the lateral malleolus and continues on the outside of the foot as the lateral dorsal cutaneous nerve, which then communicates with the intermediate dorsal cutaneous nerve, which branches off to the side of the foot. [1]",
    },
]

tokenizer = LlamaTokenizer.from_pretrained("openbmb/UltraRM-13b")
model = LlamaRewardModel.from_pretrained("openbmb/UltraRM-13b")

for example in dataset:
    inputs = tokenizer(example["chosen"], return_tensors="pt")
    chosen_reward = model(**inputs).item()
    inputs = tokenizer(example["rejected"], return_tensors="pt")
    rejected_reward = model(**inputs).item()
    print(chosen_reward - rejected_reward)

# Output 1: 2.4158712085336447
# Output 2: 0.1896953582763672
```

## Citation

```
@misc{cui2023ultrafeedback,
      title={UltraFeedback: Boosting Language Models with High-quality Feedback},
      author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
      year={2023},
      eprint={2310.01377},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```