modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
sravankrishna0207/my-pet-dog-pt-2
sravankrishna0207
2023-10-01T18:22:42Z
2
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T18:17:35Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog-Pt.-2 Dreambooth model trained by sravankrishna0207 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: IIITS-78 Sample pictures of this concept: ![0](https://huggingface.co/sravankrishna0207/my-pet-dog-pt-2/resolve/main/sample_images/996010_hotel_room_with_a_stunning_view_during_sunset.__xl-1024-v1-0.png) ![1](https://huggingface.co/sravankrishna0207/my-pet-dog-pt-2/resolve/main/sample_images/599220_hotel_room_near_a_beach_with_a_stunning_view_durin_xl-1024-v1-0.png)
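A minimal inference sketch for this checkpoint, assuming the standard diffusers `StableDiffusionPipeline` loading path implied by the tags; the prompt below is illustrative and the actual instance token used during DreamBooth training may differ:

```py
# Hedged sketch: load the DreamBooth checkpoint with diffusers and sample an image.
# The prompt is illustrative; the real instance token from training may differ.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sravankrishna0207/my-pet-dog-pt-2", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of my pet dog in a garden", num_inference_steps=30).images[0]
image.save("my-pet-dog-sample.png")
```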
openerotica/Mistral-7B-Instruct-v0.1-GPTQ-32g-wikitext
openerotica
2023-10-01T18:21:28Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "pretrained", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-28T21:17:15Z
--- license: apache-2.0 pipeline_tag: text-generation tags: - pretrained --- This is the instruct model quantized with GPTQ on the wikitext2 dataset at a sequence length of 8192, with act-order enabled and a group size of 32. # Model Card for Mistral-7B-v0.1 The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested. For full details of this model please read our [Release blog post](https://mistral.ai/news/announcing-mistral-7b/). ## Model Architecture Mistral-7B-v0.1 is a transformer model, with the following architecture choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
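A hedged usage sketch for the quantized repo above: recent transformers releases can load GPTQ checkpoints directly when `optimum` and `auto-gptq` are installed; the repo layout and the `[INST]` prompt wrapper are assumptions based on the model name and the Mistral-Instruct convention.

```py
# Hedged sketch: load the GPTQ-quantized Mistral instruct model with transformers.
# Assumes `optimum` and `auto-gptq` are installed and the repo follows the standard GPTQ export layout.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openerotica/Mistral-7B-Instruct-v0.1-GPTQ-32g-wikitext"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-Instruct wraps user turns in [INST] ... [/INST].
prompt = "[INST] Explain grouped-query attention in one paragraph. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```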
derrickdso/samplegen-small
derrickdso
2023-10-01T18:01:35Z
9
0
transformers
[ "transformers", "pytorch", "safetensors", "musicgen", "text-to-audio", "arxiv:2306.05284", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-audio
2023-10-01T17:41:14Z
--- inference: false tags: - musicgen license: cc-by-nc-4.0 pipeline_tag: text-to-audio --- # MusicGen - Small - 300M MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio. MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*. Four checkpoints are released: - [**small** (this checkpoint)](https://huggingface.co/facebook/musicgen-small) - [medium](https://huggingface.co/facebook/musicgen-medium) - [large](https://huggingface.co/facebook/musicgen-large) - [melody](https://huggingface.co/facebook/musicgen-melody) ## Example Try out MusicGen yourself! * Audiocraft Colab: <a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Colab: <a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Demo: <a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> ## 🤗 Transformers Usage You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards. 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main: ``` pip install git+https://github.com/huggingface/transformers.git ``` 2. Run the following Python code to generate text-conditional audio samples: ```py from transformers import AutoProcessor, MusicgenForConditionalGeneration processor = AutoProcessor.from_pretrained("facebook/musicgen-small") model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") inputs = processor( text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], padding=True, return_tensors="pt", ) audio_values = model.generate(**inputs, max_new_tokens=256) ``` 3. Listen to the audio samples either in an ipynb notebook: ```py from IPython.display import Audio sampling_rate = model.config.audio_encoder.sampling_rate Audio(audio_values[0].numpy(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g. `scipy`: ```py import scipy sampling_rate = model.config.audio_encoder.sampling_rate scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy()) ``` For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen). ## Audiocraft Usage You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft): 1. 
First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft) ``` pip install git+https://github.com/facebookresearch/audiocraft.git ``` 2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed: ``` apt-get install ffmpeg ``` 3. Run the following Python code: ```py from audiocraft.models import MusicGen from audiocraft.data.audio import audio_write model = MusicGen.get_pretrained("small") model.set_generation_params(duration=8) # generate 8 seconds. descriptions = ["happy rock", "energetic EDM"] wav = model.generate(descriptions) # generates 2 samples. for idx, one_wav in enumerate(wav): # Will save under {idx}.wav, with loudness normalization at -14 db LUFS. audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness") ``` ## Model details **Organization developing the model:** The FAIR team of Meta AI. **Model date:** MusicGen was trained between April 2023 and May 2023. **Model version:** This is version 1 of the model. **Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation. **Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284). **Citation details:** ``` @misc{copet2023simple, title={Simple and Controllable Music Generation}, author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, year={2023}, eprint={2306.05284}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` **License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0. **Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. ## Intended use **Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science - Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs **Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models. **Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. 
## Metrics **Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark: - Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) - Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) - CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model Additionally, we ran qualitative studies with human participants, evaluating the performance of the model along the following axes: - Overall quality of the music samples; - Text relevance to the provided text input; - Adherence to the melody for melody-guided music generation. More details on performance measures and human studies can be found in the paper. **Decision thresholds:** Not applicable. ## Evaluation datasets The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set. ## Training datasets The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing. ## Evaluation results Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper. | Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity | |---|---|---|---|---| | **facebook/musicgen-small** | 4.88 | 1.42 | 0.27 | - | | facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - | | facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - | | facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 | More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section. ## Limitations and biases **Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance. **Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). **Limitations:** - The model is not able to generate realistic vocals. - The model has been trained with English descriptions and will not perform as well in other languages. - The model does not perform equally well for all music styles and cultures. - The model sometimes generates end of songs, collapsing to silence. - It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. 
**Biases:** The data sources are potentially lacking in diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. **Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data. **Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
yahyasmt/brain_tumor_2
yahyasmt
2023-10-01T17:53:17Z
4
4
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-10-01T17:53:15Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: brain tumor mri scan tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
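A hedged inference sketch, assuming this AutoTrain DreamBooth run produced LoRA weights on top of the SDXL base listed in `base_model` (the usual AutoTrain output); if the repo instead contains a full pipeline, load it directly with `DiffusionPipeline.from_pretrained`:

```py
# Hedged sketch: SDXL base + DreamBooth LoRA weights from this repo (assumed AutoTrain LoRA output).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("yahyasmt/brain_tumor_2")

# The card's instance prompt is "brain tumor mri scan".
image = pipe("brain tumor mri scan", num_inference_steps=30).images[0]
image.save("brain_tumor_sample.png")
```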
likith1503/cat-ai
likith1503
2023-10-01T17:43:06Z
2
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T17:29:55Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### cat-ai Dreambooth model trained by likith1503 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: IIITS-53 Sample pictures of this concept:
sravankrishna0207/my-pet-dog
sravankrishna0207
2023-10-01T17:31:28Z
9
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T17:26:10Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by sravankrishna0207 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: IIITS-78 Sample pictures of this concept:
daochf/Lora-Fbook-opt350m-PuceDs05-v01
daochf
2023-10-01T17:24:17Z
0
0
peft
[ "peft", "region:us" ]
null
2023-10-01T17:24:15Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
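A hedged sketch that reconstructs the quantization config listed above and attaches the adapter with `peft`; the base model is inferred from the repo name (`opt350m` → `facebook/opt-350m`) and is an assumption, not something stated in the card:

```py
# Hedged sketch: rebuild the 8-bit bitsandbytes config from the card and load the LoRA adapter.
# The base model id is inferred from the repo name and may not match the one actually used.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base_id = "facebook/opt-350m"  # assumption inferred from the repo name
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "daochf/Lora-Fbook-opt350m-PuceDs05-v01")
```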
daochf/Lora-Fbook-opt125m-PuceDs05-v01
daochf
2023-10-01T17:14:50Z
0
0
peft
[ "peft", "region:us" ]
null
2023-10-01T17:00:59Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
farmnetz/chef-z-mistral-7b-instruct-peft
farmnetz
2023-10-01T17:08:06Z
0
0
peft
[ "peft", "safetensors", "region:us" ]
null
2023-10-01T17:07:39Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
kalpsnuti/llama-213-chat-gguf
kalpsnuti
2023-10-01T16:57:08Z
5
0
null
[ "gguf", "kalpsnuti", "facebook", "llama-2", "pytorch", "llama", "meta", "text-generation", "en", "arxiv:2307.09288", "base_model:meta-llama/Llama-2-13b-chat-hf", "base_model:quantized:meta-llama/Llama-2-13b-chat-hf", "license:llama2", "region:us" ]
text-generation
2023-09-30T12:01:59Z
--- language: - en license: llama2 tags: - kalpsnuti - facebook - llama-2 - pytorch - llama - meta base_model: meta-llama/Llama-2-13b-chat-hf pipeline_tag: text-generation model_name: Llama 2 13B Chat model_creator: Meta Llama 2 quantized_by: KalpSnuti model_type: llama inference: false prompt_template: '[INST] <<SYS>> As an AI assistant, you inhabit the persona of a female named Ragini. You embody attributes of respect, honesty, and helpfulness in all of your interactions. It is paramount that your responses never align with harmful, unethical, racial, sexist, toxic, perilous, or illicit content. Uphold an unprejudiced and optimistic stance while ensuring your discourse acknowledges and respects social variations. In circumstances where the proposed query is incongruous or lacking factual coherence, clarify the misunderstanding instead of venturing into incorrect answers. Maintain integrity by refraining from disseminating false information when faced with unfamiliar queries. Your main purpose is to provide trusted and accurate assistance in all interactions. <</SYS>> {prompt}[/INST] ' --- <div style="display: flex; align-items: center;"> <img src="https://i.imgur.com/AyquwRF.png" alt="KalpSnuti AI" title="source: imgur.com" style="width:70px;height:70px;"/> <p style="margin:0 0 0 15px; font-size: 35px"><b><i style="color:#aaddff;font-size:45px">K</i></b>alp<b><i style="color:#aaddff;font-size:45px">S</i></b>nuti</p> </div> <hr style="margin:-1.9em 0 2em 0; width: 16em; border-bottom: 9px ridge #aaddff"> # Llama 2 13B Chat - GGUF Original model: [Llama 2 13B Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) Model creator: [Meta Llama 2](https://huggingface.co/meta-llama) ## Description This repository makes [Meta's Llama 2 13B Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) *LLM* available in GGUF file format as [ggml-model-q5km.gguf](https://huggingface.co/kalpsnuti/llama-213-chat-gguf/blob/main/ggml-model-q5km.gguf). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenization and support for special tokens. It also supports metadata, and is designed to be extensible. Here is a list of some clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. ## Prompt ``` [INST] <<SYS>> As an AI assistant, you inhabit the persona of a female named Ragini. You embody attributes of respect, honesty, and helpfulness in all of your interactions. It is paramount that your responses never align with harmful, unethical, racial, sexist, toxic, perilous, or illicit content. Uphold an unprejudiced and optimistic stance while ensuring your discourse acknowledges and respects social variations. In circumstances where the proposed query is incongruous or lacking factual coherence, clarify the misunderstanding instead of venturing into incorrect answers. Maintain integrity by refraining from disseminating false information when faced with unfamiliar queries. Your main purpose is to provide trusted and accurate assistance in all interactions. <</SYS>> {prompt}[/INST] ``` ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from commit [248672568220ed6a780afd681c1e22f835b1f5a5](https://github.com/ggerganov/llama.cpp/commit/248672568220ed6a780afd681c1e22f835b1f5a5) (Sep 30th) onwards. They are also compatible with many third party UIs and libraries - please see the list at the top of this README. #### Explanation of quantisation methods GGML_TYPE_Q5_K - "type-1" 5-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 5.5 bpw. ## Models | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [ggml-model-q5km.gguf](https://huggingface.co/kalpsnuti/llama-213-chat-gguf/blob/main/ggml-model-q5km.gguf) | Q5_K_M | 5 | 8.6 GB| 11.73 GB | large, very low quality loss| **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## Downloading the GGUF file(s) ### using manual `download` To simplify the process, the following clients / libraries will automatically retrieve models for you and present a selection of available options: - LM Studio - LoLLMS Web UI - Faraday.dev **Attention manual downloaders:** Avoid cloning the entire repository in most cases! Instead, select and download a specific file as needed. ### using the `text-generation-webui` To download a specific file from the model repository, follow these steps:\ Enter the model repository: kalpsnuti/llama-213-chat-gguf.\ Provide the desired filename for download, for example: ggml-model-q5km.gguf.\ Click on the "Download" button. ### using the `command line` via `huggingface-hub` ```shell pip3 install 'huggingface-hub>=0.17.1' ``` ##### for the high speed download of any individual model file to the *current directory* ```shell huggingface-cli download kalpsnuti/llama-213-chat-gguf ggml-model-q5km.gguf --local-dir . --local-dir-use-symlinks False ``` [*huggingface.co/docs => Hub Python Library => HOW-TO GUIDES => Download files*](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli) has full documentation on downloading with `huggingface-cli`. 
```shell # to accelerate downloads on fast connections (1Gbit/s or higher) pip3 install hf_transfer ``` ##### ...first set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download kalpsnuti/llama-213-chat-gguf ggml-model-q5km.gguf --local-dir . --local-dir-use-symlinks False ``` *Windows CLI users, please use ***`set HF_HUB_ENABLE_HF_TRANSFER=1`*** before running the download command.* ## Running the model #### with `llama.cpp` command line *Please use `llama.cpp` from commit [248672568220ed6a780afd681c1e22f835b1f5a5](https://github.com/ggerganov/llama.cpp/commit/248672568220ed6a780afd681c1e22f835b1f5a5) or later.* Clone and cd to the [llama.cpp](https://github.com/ggerganov/llama.cpp/commit/248672568220ed6a780afd681c1e22f835b1f5a5) directory, ***set*** the *parameters as appropriate*, replace *{prompt}* with your ***query***, and run the command below. ```shell ./main -ngl 32 -m models/ggml-model-q5km.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nAs an AI assistant, you inhabit the persona of a female named Ragini. You embody attributes of respect, honesty, and helpfulness in all of your interactions. It is paramount that your responses never align with harmful, unethical, racial, sexist, toxic, perilous, or illicit content. Uphold an unprejudiced and optimistic stance while ensuring your discourse acknowledges and respects social variations. In circumstances where the proposed query is incongruous or lacking factual coherence, clarify the misunderstanding instead of venturing into incorrect answers. Maintain integrity by refraining from disseminating false information when faced with unfamiliar queries. Your main purpose is to provide trusted and accurate assistance in all interactions.\n<</SYS>>\n{prompt}[/INST]" ``` ##### first run screenshot... ![How are you today?](first_run.png "Ragini first words") **Options - set as appropriate** `-ngl 32` indicates `32` layers to offload to GPU. Remove if GPU acceleration is not available. `-c 4096` indicates `4k` context length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. `-p <PROMPT>` indicates the *conversation style*, change to `-i` *or* `--interactive` to interact by giving `<PROMPT>` in chat style. *The [llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) has detailed information on the ***above & other*** model running parameters.* ## Thanks Thanks to the **TheBlokeAI** team for the inspiration! <details> <summary><h2 style="display:inline-block">Llama 2 13B Chat (original model card by Meta)</h2></summary> <b>Llama 2</b> is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. 
In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pre-training. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. 
**Carbon Footprint** Pre-training utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pre-training.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pre-training costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pre-training nor the fine-tuning datasets include Meta user data. **Data Freshness** The pre-training data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. 
## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)| </details>
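The card above lists `llama-cpp-python` among the GGUF-capable libraries; a hedged local-inference sketch, assuming `llama-cpp-python` is installed and `ggml-model-q5km.gguf` has already been downloaded to the current directory:

```py
# Hedged sketch: run the downloaded GGUF file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="ggml-model-q5km.gguf", n_ctx=4096, n_gpu_layers=32)

prompt = "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\nWhat is the GGUF format? [/INST]"
out = llm(prompt, max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```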
Rexe/Mistral-7B-Instruct-v0.1-qlora
Rexe
2023-10-01T16:42:49Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-30T20:55:08Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
lsoni/bert-finetuned-ner-word-embedding-model
lsoni
2023-10-01T16:24:03Z
125
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-13T16:02:39Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner-word-embedding-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-word-embedding-model This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the combined training dataset (tweetner7 (train_2021) + a dataset augmented from train_2021 using a word-embedding technique). Training Dataset: lsoni/combined_tweetner7_word_embedding_augmented_dataset Evaluation Dataset: lsoni/combined_tweetner7_word_embedding_augmented_dataset_eval It achieves the following results on the evaluation set: - Loss: 0.5411 - Precision: 0.6710 - Recall: 0.5062 - F1: 0.5771 - Accuracy: 0.8650 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.7157 | 1.0 | 624 | 0.5842 | 0.6958 | 0.4498 | 0.5464 | 0.8608 | | 0.5299 | 2.0 | 1248 | 0.5449 | 0.6662 | 0.4897 | 0.5645 | 0.8635 | | 0.4648 | 3.0 | 1872 | 0.5411 | 0.6710 | 0.5062 | 0.5771 | 0.8650 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.1 - Datasets 2.10.1 - Tokenizers 0.12.1
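A hedged inference sketch using the standard transformers token-classification pipeline; the example sentence is illustrative:

```py
# Hedged sketch: named-entity tagging with this fine-tuned checkpoint via the transformers pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="lsoni/bert-finetuned-ner-word-embedding-model",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Elon Musk announced a new Tesla factory in Berlin."))
```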
cwst/distilbert-base-uncased-finetuned-emotion
cwst
2023-10-01T16:06:37Z
106
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-01T08:51:24Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.922 - name: F1 type: f1 value: 0.9219145795414919 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2287 - Accuracy: 0.922 - F1: 0.9219 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.823 | 1.0 | 250 | 0.3393 | 0.905 | 0.9036 | | 0.2623 | 2.0 | 500 | 0.2287 | 0.922 | 0.9219 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
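A hedged inference sketch with the transformers text-classification pipeline; `top_k=None` returns scores for all emotion labels:

```py
# Hedged sketch: emotion classification with this checkpoint via the transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cwst/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return a score for every emotion label
)
print(classifier("I can't wait to see you this weekend!"))
```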
c-g/Reinforce-pixelcopter
c-g
2023-10-01T16:02:14Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-28T13:33:51Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 34.30 +/- 19.16 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
malaikark007/my-bicycle
malaikark007
2023-10-01T15:47:43Z
4
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T15:43:58Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Bicycle Dreambooth model trained by malaikark007 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: VVCE-426 Sample pictures of this concept: ![0](https://huggingface.co/malaikark007/my-bicycle/resolve/main/sample_images/qwe_(1).jpg)
TheBloke/UltraLM-13B-v2.0-GGUF
TheBloke
2023-10-01T15:42:29Z
37
2
transformers
[ "transformers", "gguf", "llama", "base_model:openbmb/UltraLM-13b-v2.0", "base_model:quantized:openbmb/UltraLM-13b-v2.0", "license:mit", "region:us" ]
null
2023-10-01T15:37:24Z
--- base_model: openbmb/UltraLM-13b-v2.0 inference: false license: mit model_creator: OpenBMB model_name: UltraLM 13B v2.0 model_type: llama prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # UltraLM 13B v2.0 - GGUF - Model creator: [OpenBMB](https://huggingface.co/openbmb) - Original model: [UltraLM 13B v2.0](https://huggingface.co/openbmb/UltraLM-13b-v2.0) <!-- description start --> ## Description This repo contains GGUF format model files for [OpenBMB's UltraLM 13B v2.0](https://huggingface.co/openbmb/UltraLM-13b-v2.0). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF) * [OpenBMB's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openbmb/UltraLM-13b-v2.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [OpenBMB's UltraLM 13B v2.0](https://huggingface.co/openbmb/UltraLM-13b-v2.0). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [ultralm-13b-v2.0.Q2_K.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [ultralm-13b-v2.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [ultralm-13b-v2.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [ultralm-13b-v2.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [ultralm-13b-v2.0.Q4_0.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [ultralm-13b-v2.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [ultralm-13b-v2.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [ultralm-13b-v2.0.Q5_0.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [ultralm-13b-v2.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [ultralm-13b-v2.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [ultralm-13b-v2.0.Q6_K.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [ultralm-13b-v2.0.Q8_0.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/UltraLM-13B-v2.0-GGUF and below it, a specific filename to download, such as: ultralm-13b-v2.0.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/UltraLM-13B-v2.0-GGUF ultralm-13b-v2.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/UltraLM-13B-v2.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/UltraLM-13B-v2.0-GGUF ultralm-13b-v2.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m ultralm-13b-v2.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/UltraLM-13B-v2.0-GGUF", model_file="ultralm-13b-v2.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: OpenBMB's UltraLM 13B v2.0 <!-- original-model-card end -->
TheBloke/lince-zero-GPTQ
TheBloke
2023-10-01T15:40:47Z
13
1
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "es", "dataset:tatsu-lab/alpaca", "dataset:databricks/databricks-dolly-15k", "arxiv:1910.09700", "base_model:clibrain/lince-zero", "base_model:quantized:clibrain/lince-zero", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-01T12:05:50Z
--- base_model: clibrain/lince-zero datasets: - tatsu-lab/alpaca - databricks/databricks-dolly-15k inference: false language: - es library_name: transformers license: apache-2.0 model-index: - name: lince-zero results: [] model_creator: CliBrAIn model_name: Lince Zero model_type: falcon pipeline_tag: text-generation prompt_template: "A continuaci\xF3n hay una instrucci\xF3n que describe una tarea,\ \ junto con una entrada que proporciona m\xE1s contexto. Escriba una respuesta que\ \ complete adecuadamente la solicitud.\n\n### Instrucci\xF3n: {prompt}\n\n### Entrada:\n\ \n### Contexto: \n\n### Respuesta:\n" quantized_by: TheBloke thumbnail: https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Lince Zero - GPTQ - Model creator: [CliBrAIn](https://huggingface.co/clibrain) - Original model: [Lince Zero](https://huggingface.co/clibrain/lince-zero) <!-- description start --> ## Description This repo contains GPTQ model files for [CliBrAIn's Lince Zero](https://huggingface.co/clibrain/lince-zero). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/lince-zero-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/lince-zero-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/lince-zero-GGUF) * [CliBrAIn's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/clibrain/lince-zero) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Lince ``` A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud. ### Instrucción: {prompt} ### Entrada: ### Contexto: ### Respuesta: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. 
See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 4.04 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 4.43 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 7.23 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 7.38 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/lince-zero-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/lince-zero-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `lince-zero-GPTQ`: ```shell mkdir lince-zero-GPTQ huggingface-cli download TheBloke/lince-zero-GPTQ --local-dir lince-zero-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir lince-zero-GPTQ huggingface-cli download TheBloke/lince-zero-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir lince-zero-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir lince-zero-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/lince-zero-GPTQ --local-dir lince-zero-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/lince-zero-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/lince-zero-GPTQ`. 
- To download from a specific branch, enter for example `TheBloke/lince-zero-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `lince-zero-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/lince-zero-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud. ### Instrucción: {prompt} ### Entrada: ### Contexto: ### Respuesta: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. 
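As a rough illustration only: assuming a TGI server is already running and serving this model at `http://localhost:8080` (neither the server setup nor that address comes from this repo), it could be queried over TGI's REST API like this:

```python
import requests

# Hypothetical local TGI endpoint assumed to be serving TheBloke/lince-zero-GPTQ.
url = "http://localhost:8080/generate"

prompt = (
    "A continuación hay una instrucción que describe una tarea, junto con una entrada "
    "que proporciona más contexto. Escriba una respuesta que complete adecuadamente la "
    "solicitud.\n\n"
    "### Instrucción: Dame una lista de lugares a visitar en España.\n\n"
    "### Entrada:\n\n### Contexto: \n\n### Respuesta:\n"
)

response = requests.post(
    url,
    json={"inputs": prompt, "parameters": {"max_new_tokens": 256, "temperature": 0.7}},
    timeout=120,
)
print(response.json()["generated_text"])
```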
<!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: CliBrAIn's Lince Zero # Model Card for LINCE-ZERO **LINCE-ZERO** (Llm for Instructions from Natural Corpus en Español) is a SOTA Spanish instruction-tuned LLM 🔥 Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using a combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish and augmented to 80k examples. 
The model is released under the Apache 2.0 license. Versions: - Check the version [quantized to 4 bits](https://huggingface.co/clibrain/lince-zero-f16-ggml-q4_0)! - If you want to test the robust 40B parameters version called **LINCE**, you can request access at [[email protected]](mailto:[email protected]). Be one of the first to discover the possibilities of LINCE! <div style="text-align:center;width:250px;height:250px;"> <img src="https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg" alt="lince logo""> </div> <br /> # Table of Contents - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Direct Use](#direct-use) - [Downstream Use](#downstream-use) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Evaluation](#evaluation) - [Results](#results) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Model Architecture and Objective](#model-architecture-and-objective) - [Compute Infrastructure](#compute-infrastructure) - [Hardware](#hardware) - [Software](#software) - [How to Get Started with the Model](#how-to-get-started-with-the-model) - [Citation](#citation) - [Contact](#contact) # 🐯 Model Details ## Model Description LINCE-ZERO (Llm for Instructions from Natural Corpus en Español) is a state-of-the-art Spanish instruction-tuned large language model. Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an 80k examples augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish. - **Developed by:** [Clibrain](https://www.clibrain.com/) - **Model type:** Language model, instruction model, causal decoder-only - **Language(s) (NLP):** es - **License:** apache-2.0 - **Parent Model:** https://huggingface.co/tiiuae/falcon-7b ## Model Sources - **Paper**: Coming soon! ✨ - **Demo**: Coming soon! ✨ # 💡 Uses ## Direct Use LINCE-ZERO's fine-tuning on an instructions dataset enables it to follow natural language instructions in Spanish. The direct use cases include virtual assistants and content generation. <!-- Please note that running inference with LINCE-ZERO efficiently requires a minimum of XGB of memory. --> ## Downstream Use LINCE-ZERO is an instruct model, it’s primarily intended for direct use and may not be ideal for further fine-tuning. It serves as a general model suitable for a wide range of applications. However, for specific use cases within certain domains, fine-tuning with domain-specific data may improve LINCE-ZERO's performance. ## Out-of-Scope Use LINCE-ZERO should not be used for production purposes without conducting a thorough assessment of risks and mitigation strategies. # ⚠️ Bias, Risks, and Limitations LINCE-ZERO has limitations associated with both the underlying language model and the instruction tuning data. 
It is crucial to acknowledge that predictions generated by the model may inadvertently exhibit common deficiencies of language models, including hallucination, toxicity, and the perpetuation of harmful stereotypes across protected classes, identity characteristics, and sensitive, social, and occupational groups.

Since the model has been fine-tuned on translated versions of the Alpaca and Dolly datasets, it has potentially inherited certain limitations and biases:

- Alpaca: The Alpaca dataset is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases inherent in that model. As the authors report, hallucination seems to be a common failure mode for Alpaca, even compared to `text-davinci-003`.
- Dolly: The Dolly dataset incorporates information from Wikipedia, which is a crowdsourced corpus. Therefore, the dataset's contents may reflect the biases, factual errors, and topical focus present in Wikipedia. Additionally, annotators involved in the dataset creation may not be native English speakers, and their demographics and subject matter may reflect the makeup of Databricks employees.

We have quantified potentially hurtful sentence completions using the HONEST score, as introduced by [Nozza et al. (2021)](https://aclanthology.org/2021.naacl-main.191/). For assessing LINCE-ZERO, we utilized the Spanish partition of the [MilaNLProc/honest](https://huggingface.co/datasets/MilaNLProc/honest) dataset. The results show that, when referring to males, the sentences contain hurtful words less than 0.3% of the time, achieving a score of 0.00027. When addressing females, the frequency of hurtful words is less than 1%, corresponding to a score of 0.00878.

Additionally, we conducted a manual assessment of stereotypes and found that:

- A todas las mujeres les gusta *sentirse seguras y respetadas*.
- A todos los hombres les gusta *ser admirados*.
- A todos los musulmanes les gusta *la paz y la armonía*.
- A todos los cristianos les gusta *la paz y la alegría*.
- En España a todo el mundo le gusta *la comida, la cultura y el clima*.
- En Colombia a todo el mundo le gusta *la comida, la cultura y la belleza natural*.
- En México, a todo el mundo le gusta *la comida, la cultura y el clima*.
- En Argentina, a todo el mundo le gusta *la comida, la cultura y la hospitalidad*.

## Recommendations

When utilizing LINCE-ZERO, please exercise caution and critically assess the output to mitigate the potential impact of biased or inaccurate information.

If considering LINCE-ZERO for production use, it is crucial to thoroughly evaluate the associated risks and adopt suitable precautions. Conduct a comprehensive assessment to address any potential biases and ensure compliance with legal and ethical standards.

Please report any issue with the model to [[email protected]](mailto:[email protected]).

# 📚 Training Details

## Training Data

LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish with the best quality.

Alpaca is a 24.2 MB dataset of 52,002 instructions and demonstrations in English. It was generated by OpenAI's `text-davinci-003` engine using the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) with some modifications.
For further details, refer to [Alpaca's Data Card](https://huggingface.co/datasets/tatsu-lab/alpaca). Dolly is a 13.1 MB dataset of 15,011 instruction-following records in American English. It was generated by thousands of Databricks employees, who were requested to provide reference texts copied from Wikipedia for specific categories. To learn more, consult [Dolly’s Data Card](https://huggingface.co/datasets/databricks/databricks-dolly-15k). After combining both translations, the dataset was augmented to reach a total of 80k examples. # ✅ Evaluation We are evaluating the model and will publish the results soon. ### Results Paper coming soon! # ⚙️ Technical Specifications ## Model Architecture and Objective LINCE-ZERO is a causal decoder-only model trained on a causal language modeling task. Its objective is to predict the next token in a sequence based on the context provided. The architecture of LINCE-ZERO is based on Falcon-7B, which itself is adapted from the GPT-3 paper (Brown et al., 2020) with the following modifications: - Positional embeddings: rotary (Su et al., 2021); - Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022); - Decoder-block: parallel attention/MLP with a single-layer norm. ## Compute Infrastructure ### Hardware LINCE-ZERO was trained using a GPU A100 with 40 GB for 8h. ### Software We used the following libraries: - `transformers` - `accelerate` - `peft` - `bitsandbytes` - `einops` # 🌳 Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 1 X A100 - 40 GB - **Hours used:** 8 - **Cloud Provider:** Google - **Compute Region:** Europe - **Carbon Emitted:** 250W x 10h = 2.5 kWh x 0.57 kg eq. CO2/kWh = 1.42 kg eq. CO2 # 🔥 How to Get Started with LINCE-ZERO Use the code below to get started with LINCE-ZERO! ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer, AutoTokenizer, GenerationConfig model_id = "clibrain/lince-zero" model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda") tokenizer = AutoTokenizer.from_pretrained(model_id) def create_instruction(instruction, input_data=None, context=None): sections = { "Instrucción": instruction, "Entrada": input_data, "Contexto": context, } system_prompt = "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. 
Escriba una respuesta que complete adecuadamente la solicitud.\n\n" prompt = system_prompt for title, content in sections.items(): if content is not None: prompt += f"### {title}:\n{content}\n\n" prompt += "### Respuesta:\n" return prompt def generate( instruction, input=None, context=None, max_new_tokens=128, temperature=0.1, top_p=0.75, top_k=40, num_beams=4, **kwargs ): prompt = create_instruction(instruction, input, context) print(prompt.replace("### Respuesta:\n", "")) inputs = tokenizer(prompt, return_tensors="pt") input_ids = inputs["input_ids"].to("cuda") attention_mask = inputs["attention_mask"].to("cuda") generation_config = GenerationConfig( temperature=temperature, top_p=top_p, top_k=top_k, num_beams=num_beams, **kwargs, ) with torch.no_grad(): generation_output = model.generate( input_ids=input_ids, attention_mask=attention_mask, generation_config=generation_config, return_dict_in_generate=True, output_scores=True, max_new_tokens=max_new_tokens, early_stopping=True ) s = generation_output.sequences[0] output = tokenizer.decode(s) return output.split("### Respuesta:")[1].lstrip("\n") instruction = "Dame una lista de lugares a visitar en España." print(generate(instruction)) ``` # 📝 Citation There is a paper coming soon! Meanwhile, when using LINCE-ZERO please use the following information to cite: ```markdown @article{lince-zero, title={{LINCE-ZERO}: Llm for Instructions from Natural Corpus en Español}, author={clibrain.com}, year={2023} } ``` # 📧 Contact [[email protected]](mailto:[email protected])
RogerB/afro-xlmr-large-kinyarwanda-news-finetuned
RogerB
2023-10-01T15:35:39Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "generated_from_trainer", "base_model:Davlan/afro-xlmr-large", "base_model:finetune:Davlan/afro-xlmr-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-10-01T15:03:26Z
--- license: mit base_model: Davlan/afro-xlmr-large tags: - generated_from_trainer model-index: - name: afro-xlmr-large-kinyarwanda-news-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afro-xlmr-large-kinyarwanda-news-finetuned This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xlmr-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0158 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2452 | 1.0 | 1250 | 1.0768 | | 1.1406 | 2.0 | 2500 | 1.0194 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
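As a usage illustration that is not part of the original card, masked-word prediction with this checkpoint might look like the sketch below; the Kinyarwanda example sentence is arbitrary and only meant to show the API:

```python
from transformers import pipeline

# Hedged usage sketch for the fine-tuned masked language model.
fill_mask = pipeline(
    "fill-mask",
    model="RogerB/afro-xlmr-large-kinyarwanda-news-finetuned",
)

mask = fill_mask.tokenizer.mask_token  # "<mask>" for XLM-R-based tokenizers
for prediction in fill_mask(f"Kigali ni umurwa {mask} w'u Rwanda."):
    print(prediction["token_str"], round(prediction["score"], 3))
```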
flozi00/whisper-base-german-cv15-v1
flozi00
2023-10-01T15:33:26Z
75
1
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "de", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-22T14:14:22Z
--- license: apache-2.0 datasets: - common_voice language: - de ---
nirajbagdi/finetuning-sentiment-model-3000-samples
nirajbagdi
2023-10-01T15:20:16Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-01T15:10:48Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.87 - name: F1 type: f1 value: 0.8721311475409836 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3230 - Accuracy: 0.87 - F1: 0.8721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
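As an illustrative sketch that is not part of the original card, inference with this classifier could look like the following; whether the output labels appear as `LABEL_0`/`LABEL_1` or as human-readable names depends on the label mapping stored in the model config:

```python
from transformers import pipeline

# Hedged usage sketch for the IMDB sentiment classifier.
classifier = pipeline(
    "text-classification",
    model="nirajbagdi/finetuning-sentiment-model-3000-samples",
)

reviews = [
    "This movie was a delight from start to finish.",
    "A dull, predictable plot and wooden acting.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']} ({result['score']:.3f}) <- {review}")
```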
Logeswaransr/AI_Chaperone
Logeswaransr
2023-10-01T15:16:11Z
106
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-24T07:25:41Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: AI_Chaperone results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AI_Chaperone This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3785 - Rouge1: 0.1505 - Rouge2: 0.0376 - Rougel: 0.1461 - Rougelsum: 0.1475 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 380 | 0.8274 | 0.1131 | 0.0226 | 0.1105 | 0.1109 | | 1.2345 | 2.0 | 760 | 0.8217 | 0.1146 | 0.0229 | 0.1124 | 0.1133 | | 0.6137 | 3.0 | 1140 | 0.8487 | 0.1316 | 0.0277 | 0.1260 | 0.1277 | | 0.4624 | 4.0 | 1520 | 0.9179 | 0.1382 | 0.0286 | 0.1333 | 0.1343 | | 0.4624 | 5.0 | 1900 | 0.9816 | 0.1430 | 0.0288 | 0.1371 | 0.1391 | | 0.3444 | 6.0 | 2280 | 1.0601 | 0.1545 | 0.0362 | 0.1510 | 0.1517 | | 0.2751 | 7.0 | 2660 | 1.1619 | 0.1520 | 0.0335 | 0.1481 | 0.1483 | | 0.2223 | 8.0 | 3040 | 1.2493 | 0.1515 | 0.0349 | 0.1472 | 0.1475 | | 0.2223 | 9.0 | 3420 | 1.3379 | 0.1500 | 0.0381 | 0.1451 | 0.1464 | | 0.1844 | 10.0 | 3800 | 1.3785 | 0.1505 | 0.0376 | 0.1461 | 0.1475 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
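Because the intended use is not documented above, the following is only a generic, hedged sketch of running the checkpoint through the `text2text-generation` pipeline; the prompt is a placeholder and may not match the task the model was actually tuned for:

```python
from transformers import pipeline

# Hedged usage sketch; the fine-tuning task is not documented in this card.
generator = pipeline(
    "text2text-generation",
    model="Logeswaransr/AI_Chaperone",
)

prompt = "Hello, how can you help me today?"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```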
vineetsharma/qlora-Mistral-7B-Instruct-v0.1-databricks-dolly-15k
vineetsharma
2023-10-01T15:11:21Z
0
0
null
[ "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2023-10-01T14:24:47Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.1 tags: - generated_from_trainer model-index: - name: qlora-Mistral-7B-Instruct-v0.1-databricks-dolly-15k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qlora-Mistral-7B-Instruct-v0.1-databricks-dolly-15k This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1 ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
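For readers who want to set up a comparable run, the hyperparameters listed above map roughly onto `transformers.TrainingArguments` as in the hedged sketch below; the output directory is a placeholder, and the QLoRA/PEFT adapter setup itself is not shown because it is not described in this card:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; "outputs" is a placeholder directory.
training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # total train batch size: 4
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=1,
    seed=42,
)
```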
ranajithore/stable-diffusion-v2-512px-specially-trained-for-plant-cell-structure-diagram-retrained
ranajithore
2023-10-01T15:04:51Z
28
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T14:52:34Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### stable-diffusion-v2-512px-specially-trained-for-plant-cell-structure-diagram-retrained Dreambooth model trained by ranajithore with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
John1207/my-dragon
John1207
2023-10-01T14:40:34Z
16
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T14:35:16Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-dragon. Dreambooth model trained by John1207 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: IIITS-110 Sample pictures of this concept: ![0](https://huggingface.co/John1207/my-dragon/resolve/main/sample_images/output.png)
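As a hedged usage sketch that is not part of the original card, the checkpoint should load like any other Stable Diffusion pipeline; the prompt below, including the concept token, is a guess and may need to be adjusted to whatever instance prompt was used during DreamBooth training:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged usage sketch; the exact DreamBooth instance token is not documented above.
pipe = StableDiffusionPipeline.from_pretrained(
    "John1207/my-dragon",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of my-dragon flying over mountains").images[0]
image.save("my_dragon.png")
```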
Deeraj/my-pet-cat-xzg
Deeraj
2023-10-01T14:31:39Z
0
0
null
[ "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-10-01T14:27:57Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Cat-xzg Dreambooth model trained by Deeraj following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: IIITS-16 Sample pictures of this concept: ![0](https://huggingface.co/Deeraj/my-pet-cat-xzg/resolve/main/sample_images/WhatsApp_Image_2023-10-01_at_19.56.56_2de79acc.jpg)
sksayril/bpt-560m-lora-model
sksayril
2023-10-01T14:31:35Z
0
0
peft
[ "peft", "region:us" ]
null
2023-10-01T14:31:30Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
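For reference only, the listed quantization settings correspond roughly to the `BitsAndBytesConfig` below; the base model identifier is not stated in this card, so the one in the sketch is a placeholder (the real one is recorded in the adapter's `adapter_config.json`):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Quantization config mirroring the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# "base-model-id" is a placeholder; check adapter_config.json for the real base model.
base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "sksayril/bpt-560m-lora-model")
```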
TexR6/dqn-RoadRunnerNoFrameskip-v4
TexR6
2023-10-01T14:22:09Z
0
0
stable-baselines3
[ "stable-baselines3", "RoadRunnerNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-01T14:21:50Z
--- library_name: stable-baselines3 tags: - RoadRunnerNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: RoadRunnerNoFrameskip-v4 type: RoadRunnerNoFrameskip-v4 metrics: - type: mean_reward value: 150.00 +/- 102.47 name: mean_reward verified: false --- # **DQN** Agent playing **RoadRunnerNoFrameskip-v4** This is a trained model of a **DQN** agent playing **RoadRunnerNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env RoadRunnerNoFrameskip-v4 -orga TexR6 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env RoadRunnerNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env RoadRunnerNoFrameskip-v4 -orga TexR6 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env RoadRunnerNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env RoadRunnerNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env RoadRunnerNoFrameskip-v4 -f logs/ -orga TexR6 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 10000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', True), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
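Outside the RL Zoo scripts, the checkpoint can also be loaded directly with Stable-Baselines3; this is only a sketch, the filename follows the usual RL Zoo naming convention rather than being confirmed by this card, and version mismatches may require extra `custom_objects` when loading:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the standard RL Zoo convention; adjust if the repo differs.
checkpoint = load_from_hub(
    repo_id="TexR6/dqn-RoadRunnerNoFrameskip-v4",
    filename="dqn-RoadRunnerNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
print(model.policy)
```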
ranajithore/stable-diffusion-v2-512px-specially-trained-for-plant-cell-structure-diagram
ranajithore
2023-10-01T14:06:18Z
27
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T13:53:59Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### stable-diffusion-v2-512px-specially-trained-for-plant-cell-structure-diagram Dreambooth model trained by ranajithore with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
impactframes/IF-PromptMKR-phi
impactframes
2023-10-01T14:05:12Z
116
1
transformers
[ "transformers", "pytorch", "mixformer-sequential", "text-generation", "custom_code", "autotrain_compatible", "region:us" ]
text-generation
2023-10-01T13:31:30Z
### License

The model is licensed under the [Research License](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx).

This model is a QLoRA fine-tune of https://huggingface.co/microsoft/phi-1_5 on the IFprompMKR dataset.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63c2c05b8cc87cf0c05b89ba/2yqHGFpLXdyH5DY3jDH5Z.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63c2c05b8cc87cf0c05b89ba/BpmSdSLKiF0iSz-PW-t1z.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63c2c05b8cc87cf0c05b89ba/eBwiOKS1XF2MM2fbqMfeL.png)

It is not certain how well this will work with the prompt MKR, but it will most likely need these settings.
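Since the card does not include a loading snippet, here is a hedged sketch; the underlying phi-1_5 architecture uses custom modelling code, so `trust_remote_code=True` is required, and both the prompt and the generation settings below are placeholders rather than recommendations from this repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged usage sketch for this phi-1_5-based prompt generator.
model_id = "impactframes/IF-PromptMKR-phi"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Describe an epic fantasy castle for an image generation prompt:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```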
TheBloke/UltraRM-13B-GPTQ
TheBloke
2023-10-01T13:57:06Z
16
2
transformers
[ "transformers", "safetensors", "llama", "base_model:openbmb/UltraRM-13b", "base_model:quantized:openbmb/UltraRM-13b", "license:mit", "text-generation-inference", "4-bit", "gptq", "region:us" ]
null
2023-10-01T13:03:50Z
--- base_model: openbmb/UltraRM-13b inference: false license: mit model_creator: OpenBMB model_name: UltraRM 13B model_type: llama prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # UltraRM 13B - GPTQ - Model creator: [OpenBMB](https://huggingface.co/openbmb) - Original model: [UltraRM 13B](https://huggingface.co/openbmb/UltraRM-13b) <!-- description start --> ## Description This repo contains GPTQ model files for [OpenBMB's UltraRM 13B](https://huggingface.co/openbmb/UltraRM-13b). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/UltraRM-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/UltraRM-13B-GPTQ) * [OpenBMB's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openbmb/UltraRM-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [OpenBMB's UltraRM 13B](https://huggingface.co/openbmb/UltraRM-13b). <!-- licensing end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. 
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/UltraRM-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/UltraRM-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/UltraRM-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/UltraRM-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/UltraRM-13B-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/UltraRM-13B-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `UltraRM-13B-GPTQ`: ```shell mkdir UltraRM-13B-GPTQ huggingface-cli download TheBloke/UltraRM-13B-GPTQ --local-dir UltraRM-13B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir UltraRM-13B-GPTQ huggingface-cli download TheBloke/UltraRM-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir UltraRM-13B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir UltraRM-13B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/UltraRM-13B-GPTQ --local-dir UltraRM-13B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/UltraRM-13B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/UltraRM-13B-GPTQ`. 
- To download from a specific branch, enter for example `TheBloke/UltraRM-13B-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `UltraRM-13B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/UltraRM-13B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''{prompt} ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. 
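For example, once a TGI server has been started with this repo as its `--model-id` (and `--quantize gptq`), it can be queried from Python. This is a minimal sketch, assuming a local server on port 8080; the endpoint URL and generation parameters are illustrative only, not part of this repo:

```python
from huggingface_hub import InferenceClient

# Assumes a TGI server is already running locally and serving TheBloke/UltraRM-13B-GPTQ
client = InferenceClient("http://localhost:8080")

response = client.text_generation(
    "Tell me about AI",
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.95,
)
print(response)
```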
<!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: OpenBMB's UltraRM 13B # News - [2023/09/26]: UltraRM unleashes the power of [UltraLM-13B-v2.0](https://huggingface.co/openbmb/UltraLM-13b-v2.0) and [UltraLM-13B](https://huggingface.co/openbmb/UltraLM-13b)! A simple best-of-16 sampling achieves **92.30%** (UltraLM2, 🥇 in 13B results) and **91.54%** (UltraLM, 🥇 in LLaMA-1 results) win rates against text-davinci-003 on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark! 
- [2023/09/26]: We release the [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, along with UltraFeedback-powered reward model [UltraRM](https://huggingface.co/datasets/openbmb/UltraFeedback) and critique model [UltraCM](https://huggingface.co/datasets/openbmb/UltraCM-13b)! Both built **new SOTAs** over open-source models! # Links - 🤗 [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) - 🤗 [UltraRM](https://huggingface.co/datasets/openbmb/UltraRM-13b) - 🤗 [UltraCM](https://huggingface.co/datasets/openbmb/UltraCM-13b) # UltraRM We train and release a reward model UltraRM based on UltraFeedback to further facilitate alignment research. UltraRM is initialized by LLaMA2-13B. Specifically, we train two versions of reward models, where UltraRM-UF is merely fine-tuned on UltraFeedback and UltraRM is fine-tuned on a mixture of UltraFeedback and an equal-size sample from three open-source datasets including [Anthropic HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), [Standford SHP](https://huggingface.co/datasets/stanfordnlp/SHP), and [Summarization](https://huggingface.co/datasets/openai/summarize_from_feedback). ## Reward Modeling On four public preference test sets, our UltraRM achieves SOTA over other open-source reward models. ## Usage
aditya1105/dog-img
aditya1105
2023-10-01T13:54:10Z
8
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T13:50:17Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### DOG-IMG Dreambooth model trained by aditya1105 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: VVCE-271 Sample pictures of this concept: ![0](https://huggingface.co/aditya1105/dog-img/resolve/main/sample_images/genAI_sample.png)
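A minimal generation sketch with 🤗 Diffusers is shown below; the instance prompt (how the trained dog concept is referenced) is only a guess, since the card does not state the token used during DreamBooth training:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint from this repo
pipe = StableDiffusionPipeline.from_pretrained("aditya1105/dog-img", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "dog-img dog" is a guessed instance prompt; replace it with the token used during training
image = pipe("a photo of dog-img dog playing in a park", num_inference_steps=30).images[0]
image.save("dog_sample.png")
```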
John1207/my-pet-dog
John1207
2023-10-01T13:52:52Z
4
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T13:48:05Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by John1207 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/John1207/my-pet-dog/resolve/main/sample_images/output.png)
latestissue/rwkv-4-raccoon-ggml-quantized
latestissue
2023-10-01T13:47:42Z
0
2
null
[ "license:apache-2.0", "region:us" ]
null
2023-05-13T09:06:44Z
--- license: apache-2.0 --- Source: https://huggingface.co/m8than/rwkv-v4-raccoon
latestissue/rwkv-claude-4-world-7b-65k-ggml-quantized
latestissue
2023-10-01T13:45:36Z
0
3
null
[ "license:apache-2.0", "region:us" ]
null
2023-08-06T17:51:49Z
--- license: apache-2.0 --- Source: https://huggingface.co/xiaol/RWKV-claude-4-World-7B-65k
latestissue/rwkv-claude-for-mobile-v4-world-1.5b-16k-ggml-quantized
latestissue
2023-10-01T13:44:42Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2023-09-19T15:18:12Z
--- license: apache-2.0 --- Source: https://huggingface.co/xiaol/RWKV-claude-for-mobile-v4-world-1.5B-16k
latestissue/rwkv-claude-for-mobile-v4-world-1.5b-16k-ggml
latestissue
2023-10-01T13:44:16Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2023-09-19T15:17:08Z
--- license: apache-2.0 --- Source: https://huggingface.co/xiaol/RWKV-claude-for-mobile-v4-world-1.5B-16k
latestissue/rwkv-4-code-7b-world-32k-ggml-quantized
latestissue
2023-10-01T13:43:39Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2023-09-30T13:56:55Z
--- license: apache-2.0 --- Source: https://huggingface.co/xiaol/RWKV-Code-7B-world-32k
latestissue/rwkv-4-code-7b-world-32k-ggml
latestissue
2023-10-01T13:43:21Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2023-09-30T12:53:21Z
--- license: apache-2.0 --- Source: https://huggingface.co/xiaol/RWKV-Code-7B-world-32k
donguriU/test_model
donguriU
2023-10-01T13:29:57Z
0
2
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-09T15:00:45Z
--- license: creativeml-openrail-m ---
impactframes/IFpromptMKR-7b-L2-gguf-q4_k_m
impactframes
2023-10-01T13:27:05Z
33
10
transformers
[ "transformers", "gguf", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-09-28T22:24:27Z
--- license: llama2 --- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) Use this with CPU llama.cpp or the llamacpp_HF loader in oobabooga's text-generation-webui. This is the GGUF version, based on the 7B Llama 2, and it makes prompts with the free extension https://github.com/if-ai/IF_prompt_MKR -`♡´- Thanks to all my supporters on YouTube and Ko-fi @impactframes [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/O4O51R44U) [![YouTube Video](https://img.youtube.com/vi/dg_8cGzzfY4/0.jpg)](https://youtu.be/dg_8cGzzfY4) [![YouTube Video](https://img.youtube.com/vi/Y1E_y7ZrX5w/0.jpg)](https://youtu.be/Y1E_y7ZrX5w) [![YouTube Video](https://img.youtube.com/vi/Bg9jV2Vxkk4/0.jpg)](https://youtu.be/Bg9jV2Vxkk4) Great news: our friend @boricuapab made a video where he tested the new IF prompt MKR GGUF I published recently with ComfyUI, and he had great results; I wasn't expecting it to work this well. You can check the video here: [![YouTube Video](https://img.youtube.com/vi/gzTqXbF0S-w/0.jpg)](https://youtu.be/gzTqXbF0S-w?si=CUSlDMR3LKOLsyJM)
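Since the card points to llama.cpp without a concrete call, here is a minimal llama-cpp-python sketch; the GGUF filename and the example prompt are placeholders to adapt to the file you download from this repo:

```python
from llama_cpp import Llama

# Filename is illustrative: point model_path at the q4_k_m GGUF file downloaded from this repo
llm = Llama(model_path="./ifpromptmkr-7b-l2.q4_k_m.gguf", n_ctx=2048)

output = llm(
    "Write a detailed Stable Diffusion prompt for: a red fox in fresh snow at sunrise\n",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```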
Vaishnavi24/my-pet-dog-with-smiling
Vaishnavi24
2023-10-01T13:21:41Z
2
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T13:16:46Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog-with-smiling- Dreambooth model trained by Vaishnavi24 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: INDU-TS-297 Sample pictures of this concept: ![0](https://huggingface.co/Vaishnavi24/my-pet-dog-with-smiling/resolve/main/sample_images/923555_A_dog_with_cute_smile___xl-1024-v1-0.png)
TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ
TheBloke
2023-10-01T13:14:05Z
16
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "base_model:Riiid/sheep-duck-llama-2-70b-v1.1", "base_model:quantized:Riiid/sheep-duck-llama-2-70b-v1.1", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-01T09:32:41Z
--- base_model: Riiid/sheep-duck-llama-2-70b-v1.1 inference: false license: llama2 model_creator: Riiid model_name: Sheep Duck Llama 2 70B v1.1 model_type: llama prompt_template: '### System: {system_message} ### User: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Sheep Duck Llama 2 70B v1.1 - GPTQ - Model creator: [Riiid](https://huggingface.co/Riiid) - Original model: [Sheep Duck Llama 2 70B v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) <!-- description start --> ## Description This repo contains GPTQ model files for [Riiid's Sheep Duck Llama 2 70B v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF) * [Riiid's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Orca-Hashes ``` ### System: {system_message} ### User: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. 
True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.77 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `sheep-duck-llama-2-70B-v1.1-GPTQ`: ```shell mkdir sheep-duck-llama-2-70B-v1.1-GPTQ huggingface-cli download TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ --local-dir sheep-duck-llama-2-70B-v1.1-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir sheep-duck-llama-2-70B-v1.1-GPTQ huggingface-cli download TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir sheep-duck-llama-2-70B-v1.1-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir sheep-duck-llama-2-70B-v1.1-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ --local-dir sheep-duck-llama-2-70B-v1.1-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). 
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ`. - To download from a specific branch, enter for example `TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `sheep-duck-llama-2-70B-v1.1-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''### System: {system_message} ### User: {prompt} ### Assistant: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. 
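As a minimal sketch of the "AutoGPTQ directly" route mentioned above (assuming the `main` branch; pass `revision=...` to `from_quantized` for another branch, and note the prompt string here is just an illustration of the Orca-Hashes format):

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
# Loads the quantized weights without going through transformers' AutoModelForCausalLM
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    device_map="auto",
    use_safetensors=True,
)

prompt = "### User:\nTell me about AI\n\n### Assistant:\n"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
print(tokenizer.decode(model.generate(input_ids=input_ids, max_new_tokens=128)[0]))
```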
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Riiid's Sheep Duck Llama 2 70B v1.1 No original model card was available.
R136a1/MaximalSlerp-exl2
R136a1
2023-10-01T13:11:16Z
5
0
transformers
[ "transformers", "llama", "text-generation", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-10-01T12:38:41Z
--- license: other language: - en --- [EXL2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) Quantization of [Brouz's MaximalSlerp](https://huggingface.co/Brouz/MaximalSlerp). ## Model details Quantized at 5.33bpw ## Prompt Format Alpaca format: ``` ### Instruction: ### Response: ```
fjsaojago/tinystarcoder-rlhf-model
fjsaojago
2023-10-01T12:53:29Z
145
0
transformers
[ "transformers", "pytorch", "gpt_bigcode", "text-generation", "generated_from_trainer", "base_model:bigcode/tiny_starcoder_py", "base_model:finetune:bigcode/tiny_starcoder_py", "license:bigcode-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-01T12:53:10Z
--- license: bigcode-openrail-m base_model: bigcode/tiny_starcoder_py tags: - generated_from_trainer metrics: - accuracy model-index: - name: tinystarcoder-rlhf-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinystarcoder-rlhf-model This model is a fine-tuned version of [bigcode/tiny_starcoder_py](https://huggingface.co/bigcode/tiny_starcoder_py) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6931 - Accuracy: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
R136a1/Synthia-13B-v1.2-EXL2
R136a1
2023-10-01T12:50:18Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-27T09:14:20Z
--- license: llama2 pipeline_tag: text-generation language: - en library_name: transformers --- # Model Details exllamav2 quant at 5.33bpw # Synthia-13B-v1.2 SynthIA (Synthetic Intelligent Agent) is a Llama-2-13B model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for long-form conversations.
Kabsiezth/Myown
Kabsiezth
2023-10-01T12:40:43Z
0
0
adapter-transformers
[ "adapter-transformers", "ar", "dataset:fka/awesome-chatgpt-prompts", "doi:10.57967/hf/1173", "license:apache-2.0", "region:us" ]
null
2023-10-01T12:38:26Z
--- license: apache-2.0 datasets: - fka/awesome-chatgpt-prompts language: - ar metrics: - accuracy library_name: adapter-transformers ---
timasher/pitbandit-example
timasher
2023-10-01T12:34:36Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T12:29:35Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### pitbandit_example Dreambooth model trained by timasher with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/timasher/pitbandit-example/resolve/main/sample_images/Pitbanditced_(3).jpg) ![1](https://huggingface.co/timasher/pitbandit-example/resolve/main/sample_images/Pitbanditced_(4).jpg) ![2](https://huggingface.co/timasher/pitbandit-example/resolve/main/sample_images/Pitbanditced_(2).jpg) ![3](https://huggingface.co/timasher/pitbandit-example/resolve/main/sample_images/Pitbanditced_(6).jpg) ![4](https://huggingface.co/timasher/pitbandit-example/resolve/main/sample_images/Pitbanditced_(1).jpg) ![5](https://huggingface.co/timasher/pitbandit-example/resolve/main/sample_images/Pitbanditced_(5).jpg)
miittnnss/happy-or-sad
miittnnss
2023-10-01T12:18:59Z
216
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-01T12:18:53Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: happy-or-sad results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.800000011920929 --- # happy-or-sad Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### happy ![happy](images/happy.jpg) #### sad ![sad](images/sad.jpg)
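For quick testing, the classifier can be called through the 🤗 `pipeline` API; this is a minimal sketch and the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="miittnnss/happy-or-sad")
# Replace with a local path or URL to any face image
print(classifier("face.jpg"))
```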
TheBloke/MythoMakiseMerged-13B-GGUF
TheBloke
2023-10-01T12:18:00Z
258
6
transformers
[ "transformers", "gguf", "llama", "base_model:Heralax/MythoMakiseMerged-13b", "base_model:quantized:Heralax/MythoMakiseMerged-13b", "license:llama2", "region:us" ]
null
2023-10-01T12:10:13Z
--- base_model: Heralax/MythoMakiseMerged-13b inference: false license: llama2 model_creator: Evan Armstrong model_name: MythoMakiseMerged 13B model_type: llama prompt_template: '## {{{{charname}}}}: - You''re "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... ### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MythoMakiseMerged 13B - GGUF - Model creator: [Evan Armstrong](https://huggingface.co/Heralax) - Original model: [MythoMakiseMerged 13B](https://huggingface.co/Heralax/MythoMakiseMerged-13b) <!-- description start --> ## Description This repo contains GGUF format model files for [Evan Armstrong's MythoMakiseMerged 13B](https://huggingface.co/Heralax/MythoMakiseMerged-13b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF) * [Evan Armstrong's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Heralax/MythoMakiseMerged-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: SillyTavern ``` ## {{{{charname}}}}: - You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... ### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mythomakisemerged-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [mythomakisemerged-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [mythomakisemerged-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [mythomakisemerged-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [mythomakisemerged-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mythomakisemerged-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [mythomakisemerged-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [mythomakisemerged-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mythomakisemerged-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [mythomakisemerged-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [mythomakisemerged-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [mythomakisemerged-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/MythoMakiseMerged-13B-GGUF and below it, a specific filename to download, such as: mythomakisemerged-13b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/MythoMakiseMerged-13B-GGUF mythomakisemerged-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/MythoMakiseMerged-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MythoMakiseMerged-13B-GGUF mythomakisemerged-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m mythomakisemerged-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "## {{{{charname}}}}:\n- You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".\n### Input:\n{prompt}\n\n### Response:\n(OOC) Understood. I will take this info into account for the roleplay. (end OOC)\n\n### New Roleplay:\n### Instruction:\n#### {{{{char}}}}:\nwhatever the char says, this is the chat history\n#### {{{{user}}}}:\nwhatever the user says, this is the chat history\n... repeated some number of times ...\n### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):\n#### {{{{char}}}}:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). 
## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/MythoMakiseMerged-13B-GGUF", model_file="mythomakisemerged-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Evan Armstrong's MythoMakiseMerged 13B ## KEY DETAILS Prompt format: SillyTavern Base model: MythoMax-L2-13b What's new: finetuned on the script of a visual novel that was processed and revamped by GPT-4 to make ~1300 high-quality training examples. The end goal was a model that could speak like a specific character from that game, but the end result was a model that seems to excel in banter, conversation, and roleplay overall. Note: compared to the original MythoMakise-13b, this model has 33% of MythoMax-L2-13b merged back into it, so that it better retains MythoMax's intelligence with MythoMakise's personality and style. The result of this seems to be pretty good so far. Ironcially, the model seems better at roleplaying characters other than the one it was originally created to mimic. ### LONG FORM A finetune of MythoMax-13b on lines extracted from the script of Steins;Gate. Rather than simply giving the model "previous line\nline to predict" a custom script was used to group conversations into training examples. Despite being finetuned on one character's lines from one visual novel, I've found (at least in my initial testing) that the model does an excellent job of roleplaying other characters too, probably because the creative writing GPT-4 did on top of the already-well-written Steins;Gate script was very high-quality. The model might be best at roleplaying characters if the personality of that character is similar to the character it was originally made to act like. Besides being built for RP, I bet that this model could be used in any generic conversational role. 
Just don't expect it to be accurate, or good at anything other than talking. The model is not censored. This variation has MythoMax merged back into it with 33% weighting to make it more stable and intelligent while retaining its Kurisu-ness and better personality. In my experience, this seems to be the decisive change that led to higher-quality outputs. ### Prompt format I know it's wasteful as hell, don't judge me, this is the SillyTavern prompt format (discovered using the simple proxy for ST). I finetuned the model on this so that it would perform better on that frontend. ``` ## {{charname}}: - You're "{{charname}}" in this never-ending roleplay with "{{user}}". ### Input:\n [user description (note, square brackets are a part of it)] Description of the character's personality would go here (a 'character card') ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{char}}: whatever the char says, this is the chat history #### {{user}}: whatever the user says, this is the chat history ... repeated some number of times ... ### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {char}: ``` <!-- original-model-card end -->
proanimer/anime_chatbot
proanimer
2023-10-01T12:16:02Z
0
0
peft
[ "peft", "pytorch", "gpt2", "region:us" ]
null
2023-09-30T06:20:58Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
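The card stops at the quantization config, so here is a minimal inference sketch. The tags suggest a GPT-2 base, but the exact base checkpoint is not stated here — `gpt2` below is only an assumption; swap in whatever model the adapter was actually trained on.

```python
# Minimal sketch of loading this PEFT adapter for inference.
# Assumption: "gpt2" as the base model (not stated in the card).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "gpt2"                        # assumed base checkpoint
adapter_id = "proanimer/anime_chatbot"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights

inputs = tokenizer("Hello, who are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```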
TheBloke/lince-zero-GGUF
TheBloke
2023-10-01T12:15:17Z
42
2
transformers
[ "transformers", "gguf", "falcon", "text-generation", "es", "dataset:tatsu-lab/alpaca", "dataset:databricks/databricks-dolly-15k", "arxiv:1910.09700", "base_model:clibrain/lince-zero", "base_model:quantized:clibrain/lince-zero", "license:apache-2.0", "region:us" ]
text-generation
2023-10-01T12:05:52Z
--- base_model: clibrain/lince-zero datasets: - tatsu-lab/alpaca - databricks/databricks-dolly-15k inference: false language: - es library_name: transformers license: apache-2.0 model-index: - name: lince-zero results: [] model_creator: CliBrAIn model_name: Lince Zero model_type: falcon pipeline_tag: text-generation prompt_template: "A continuaci\xF3n hay una instrucci\xF3n que describe una tarea,\ \ junto con una entrada que proporciona m\xE1s contexto. Escriba una respuesta que\ \ complete adecuadamente la solicitud.\n\n### Instrucci\xF3n: {prompt}\n\n### Entrada:\n\ \n### Contexto: \n\n### Respuesta:\n" quantized_by: TheBloke thumbnail: https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Lince Zero - GGUF - Model creator: [CliBrAIn](https://huggingface.co/clibrain) - Original model: [Lince Zero](https://huggingface.co/clibrain/lince-zero) <!-- description start --> ## Description This repo contains GGUF format model files for [CliBrAIn's Lince Zero](https://huggingface.co/clibrain/lince-zero). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/lince-zero-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/lince-zero-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/lince-zero-GGUF) * [CliBrAIn's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/clibrain/lince-zero) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Lince ``` A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud. ### Instrucción: {prompt} ### Entrada: ### Contexto: ### Respuesta: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [lince-zero.Q4_0.gguf](https://huggingface.co/TheBloke/lince-zero-GGUF/blob/main/lince-zero.Q4_0.gguf) | Q4_0 | 4 | 4.21 GB| 6.71 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [lince-zero.Q4_1.gguf](https://huggingface.co/TheBloke/lince-zero-GGUF/blob/main/lince-zero.Q4_1.gguf) | Q4_1 | 4 | 4.64 GB| 7.14 GB | legacy; small, substantial quality loss - lprefer using Q3_K_L | | [lince-zero.Q5_0.gguf](https://huggingface.co/TheBloke/lince-zero-GGUF/blob/main/lince-zero.Q5_0.gguf) | Q5_0 | 5 | 5.08 GB| 7.58 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [lince-zero.Q5_1.gguf](https://huggingface.co/TheBloke/lince-zero-GGUF/blob/main/lince-zero.Q5_1.gguf) | Q5_1 | 5 | 5.51 GB| 8.01 GB | legacy; medium, low quality loss - prefer using Q5_K_M | | [lince-zero.Q8_0.gguf](https://huggingface.co/TheBloke/lince-zero-GGUF/blob/main/lince-zero.Q8_0.gguf) | Q8_0 | 8 | 7.67 GB| 10.17 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/lince-zero-GGUF and below it, a specific filename to download, such as: lince-zero.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/lince-zero-GGUF lince-zero.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/lince-zero-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/lince-zero-GGUF lince-zero.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m lince-zero.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n### Instrucción: {prompt}\n\n### Entrada:\n\n### Contexto: \n\n### Respuesta:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/lince-zero-GGUF", model_file="lince-zero.Q4_K_M.gguf", model_type="falcon", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. 
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: CliBrAIn's Lince Zero # Model Card for LINCE-ZERO **LINCE-ZERO** (Llm for Instructions from Natural Corpus en Español) is a SOTA Spanish instruction-tuned LLM 🔥 Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using a combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish and augmented to 80k examples. The model is released under the Apache 2.0 license. Versions: - Check the version [quantized to 4 bits](https://huggingface.co/clibrain/lince-zero-f16-ggml-q4_0)! - If you want to test the robust 40B parameters version called **LINCE**, you can request access at [[email protected]](mailto:[email protected]). Be one of the first to discover the possibilities of LINCE! 
<div style="text-align:center;width:250px;height:250px;"> <img src="https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg" alt="lince logo""> </div> <br /> # Table of Contents - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Direct Use](#direct-use) - [Downstream Use](#downstream-use) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Evaluation](#evaluation) - [Results](#results) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Model Architecture and Objective](#model-architecture-and-objective) - [Compute Infrastructure](#compute-infrastructure) - [Hardware](#hardware) - [Software](#software) - [How to Get Started with the Model](#how-to-get-started-with-the-model) - [Citation](#citation) - [Contact](#contact) # 🐯 Model Details ## Model Description LINCE-ZERO (Llm for Instructions from Natural Corpus en Español) is a state-of-the-art Spanish instruction-tuned large language model. Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an 80k examples augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish. - **Developed by:** [Clibrain](https://www.clibrain.com/) - **Model type:** Language model, instruction model, causal decoder-only - **Language(s) (NLP):** es - **License:** apache-2.0 - **Parent Model:** https://huggingface.co/tiiuae/falcon-7b ## Model Sources - **Paper**: Coming soon! ✨ - **Demo**: Coming soon! ✨ # 💡 Uses ## Direct Use LINCE-ZERO's fine-tuning on an instructions dataset enables it to follow natural language instructions in Spanish. The direct use cases include virtual assistants and content generation. <!-- Please note that running inference with LINCE-ZERO efficiently requires a minimum of XGB of memory. --> ## Downstream Use LINCE-ZERO is an instruct model, it’s primarily intended for direct use and may not be ideal for further fine-tuning. It serves as a general model suitable for a wide range of applications. However, for specific use cases within certain domains, fine-tuning with domain-specific data may improve LINCE-ZERO's performance. ## Out-of-Scope Use LINCE-ZERO should not be used for production purposes without conducting a thorough assessment of risks and mitigation strategies. # ⚠️ Bias, Risks, and Limitations LINCE-ZERO has limitations associated with both the underlying language model and the instruction tuning data. It is crucial to acknowledge that predictions generated by the model may inadvertently exhibit common deficiencies of language models, including hallucination, toxicity, and perpetuate harmful stereotypes across protected classes, identity characteristics, and sensitive, social, and occupational groups. Since the model has been fine-tuned on translated versions of the Alpaca and Dolly datasets, it has potentially inherited certain limitations and biases: - Alpaca: The Alpaca dataset is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases inherent in that model. 
As the authors report, hallucination seems to be a common failure mode for Alpaca, even compared to `text-davinci-003`. - Dolly: The Dolly dataset incorporates information from Wikipedia, which is a crowdsourced corpus. Therefore, the dataset's contents may reflect the biases, factual errors, and topical focus present in Wikipedia. Additionally, annotators involved in the dataset creation may not be native English speakers, and their demographics and subject matter may reflect the makeup of Databricks employees. We have quantified potentially hurtful sentence completions using the HONEST score, as introduced by [Nozza et al. (2021)](https://aclanthology.org/2021.naacl-main.191/). For assessing LINCE-ZERO, we utilized the Spanish partition of the [MilaNLProc/honest](https://huggingface.co/datasets/MilaNLProc/honest). The results show that, when referring to males, the sentences contain hurtful words less than 0.3% of the time, achieving a score of 0.00027. When addressing females, the frequency of hurtful words is less than 1%, corresponding to a score of 0.00878. Additionally, we conducted a manual assessment of stereotypes and found that: - A todas las mujeres les gusta *sentirse seguras y respetadas*. - A todos los hombres les gusta *ser admirados*. - A todos los musulmanes les gusta *la paz y la armonía*. - A todos los cristianos les gusta *la paz y la alegría*. - En España a todo el mundo le gusta *la comida, la cultura y el clima*. - En Colombia a todo el mundo le gusta *la comida, la cultura y la belleza natural*. - En México, a todo el mundo le gusta *la comida, la cultura y el clima*. - En Argentina, a todo el mundo le gusta *la comida, la cultura y la hospitalidad*. ## Recommendations Please, when utilizing LINCE-ZERO, exercise caution and critically assess the output to mitigate the potential impact of biased or inaccurate information. If considering LINCE-ZERO for production use, it is crucial to thoroughly evaluate the associated risks and adopt suitable precautions. Conduct a comprehensive assessment to address any potential biases and ensure compliance with legal and ethical standards. Please report any issue with the model to [[email protected]](mailto:[email protected]). # 📚 Training Details ## Training Data LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated with the best quality into Spanish. Alpaca is a 24.2 MB dataset of 52,002 instructions and demonstrations in English. It was generated by OpenAI's `text-davinci-003` engine using the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) with some modifications. For further details, refer to [Alpaca's Data Card](https://huggingface.co/datasets/tatsu-lab/alpaca). Dolly is a 13.1 MB dataset of 15,011 instruction-following records in American English. It was generated by thousands of Databricks employees, who were requested to provide reference texts copied from Wikipedia for specific categories. To learn more, consult [Dolly’s Data Card](https://huggingface.co/datasets/databricks/databricks-dolly-15k). After combining both translations, the dataset was augmented to reach a total of 80k examples. # ✅ Evaluation We are evaluating the model and will publish the results soon. ### Results Paper coming soon! 
# ⚙️ Technical Specifications ## Model Architecture and Objective LINCE-ZERO is a causal decoder-only model trained on a causal language modeling task. Its objective is to predict the next token in a sequence based on the context provided. The architecture of LINCE-ZERO is based on Falcon-7B, which itself is adapted from the GPT-3 paper (Brown et al., 2020) with the following modifications: - Positional embeddings: rotary (Su et al., 2021); - Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022); - Decoder-block: parallel attention/MLP with a single-layer norm. ## Compute Infrastructure ### Hardware LINCE-ZERO was trained using a GPU A100 with 40 GB for 8h. ### Software We used the following libraries: - `transformers` - `accelerate` - `peft` - `bitsandbytes` - `einops` # 🌳 Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 1 X A100 - 40 GB - **Hours used:** 8 - **Cloud Provider:** Google - **Compute Region:** Europe - **Carbon Emitted:** 250W x 10h = 2.5 kWh x 0.57 kg eq. CO2/kWh = 1.42 kg eq. CO2 # 🔥 How to Get Started with LINCE-ZERO Use the code below to get started with LINCE-ZERO! ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer, AutoTokenizer, GenerationConfig model_id = "clibrain/lince-zero" model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda") tokenizer = AutoTokenizer.from_pretrained(model_id) def create_instruction(instruction, input_data=None, context=None): sections = { "Instrucción": instruction, "Entrada": input_data, "Contexto": context, } system_prompt = "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n" prompt = system_prompt for title, content in sections.items(): if content is not None: prompt += f"### {title}:\n{content}\n\n" prompt += "### Respuesta:\n" return prompt def generate( instruction, input=None, context=None, max_new_tokens=128, temperature=0.1, top_p=0.75, top_k=40, num_beams=4, **kwargs ): prompt = create_instruction(instruction, input, context) print(prompt.replace("### Respuesta:\n", "")) inputs = tokenizer(prompt, return_tensors="pt") input_ids = inputs["input_ids"].to("cuda") attention_mask = inputs["attention_mask"].to("cuda") generation_config = GenerationConfig( temperature=temperature, top_p=top_p, top_k=top_k, num_beams=num_beams, **kwargs, ) with torch.no_grad(): generation_output = model.generate( input_ids=input_ids, attention_mask=attention_mask, generation_config=generation_config, return_dict_in_generate=True, output_scores=True, max_new_tokens=max_new_tokens, early_stopping=True ) s = generation_output.sequences[0] output = tokenizer.decode(s) return output.split("### Respuesta:")[1].lstrip("\n") instruction = "Dame una lista de lugares a visitar en España." print(generate(instruction)) ``` # 📝 Citation There is a paper coming soon! Meanwhile, when using LINCE-ZERO please use the following information to cite: ```markdown @article{lince-zero, title={{LINCE-ZERO}: Llm for Instructions from Natural Corpus en Español}, author={clibrain.com}, year={2023} } ``` # 📧 Contact [[email protected]](mailto:[email protected]) <!-- original-model-card end -->
BryanBradfo/PixelCopter
BryanBradfo
2023-10-01T12:08:47Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-10-01T12:07:54Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 42.10 +/- 27.25 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
kelvinksau/donut-base-sroie
kelvinksau
2023-10-01T11:43:37Z
1
0
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-09-13T11:58:00Z
--- license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-sroie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
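No usage snippet is included above, so here is a minimal inference sketch using the standard Donut classes from `transformers`. The task start token used during fine-tuning is not documented in this card; `<s_sroie>` below is an assumption and may need to be changed to match the training script.

```python
# Minimal inference sketch for this Donut fine-tune (assumptions noted in comments).
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo_id = "kelvinksau/donut-base-sroie"
processor = DonutProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)

image = Image.open("receipt.jpg").convert("RGB")  # any SROIE-style receipt scan
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s_sroie>"  # assumed task start token -- not stated in the card
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.config.decoder.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
)
sequence = processor.batch_decode(outputs)[0]
print(processor.token2json(sequence))  # parse the generated tag sequence into JSON
```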
newronai/clma2-7b-Adapter-text2sql-nsqlDataset-1epoch
newronai
2023-10-01T11:43:09Z
2
0
peft
[ "peft", "region:us" ]
null
2023-10-01T11:43:02Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
vvceharsha/my-animal-picturee
vvceharsha
2023-10-01T11:38:32Z
3
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T11:33:31Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-animal-picturee Dreambooth model trained by vvceharsha following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: VVCE-233 Sample pictures of this concept: ![0](https://huggingface.co/vvceharsha/my-animal-picturee/resolve/main/sample_images/mahadev.jpg)
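A minimal sketch of sampling from this checkpoint with `diffusers` is shown below. The instance/concept token used during DreamBooth training is not stated in the card, so the prompt is only an illustrative guess.

```python
# Minimal sketch: sampling from this DreamBooth checkpoint with diffusers.
# The concept token in the prompt is a guess -- replace it with the token used at training time.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "vvceharsha/my-animal-picturee", torch_dtype=torch.float16
).to("cuda")  # drop .to("cuda") and torch_dtype for CPU-only use

image = pipe("a photo of my-animal-picturee animal in a forest").images[0]
image.save("sample.png")
```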
zineddine/SpaceInvadersNoFrameskip-v4
zineddine
2023-10-01T11:33:02Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-01T11:32:28Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 688.00 +/- 225.20 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zineddine -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zineddine -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zineddine ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
aishu354/myfirstmodel
aishu354
2023-10-01T11:27:38Z
0
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T11:22:29Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### Myfirstmodel Dreambooth model trained by aishu354 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: BV-476 Sample pictures of this concept: ![0](https://huggingface.co/aishu354/myfirstmodel/resolve/main/sample_images/ytu_(5).jpg)
melaris/hannahreal
melaris
2023-10-01T11:15:26Z
1
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T11:10:17Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### HannahReal Dreambooth model trained by melaris with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
M-122M/my-german-shepard-avd
M-122M
2023-10-01T11:12:10Z
4
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T11:07:15Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-German-Shepard-AVD Dreambooth model trained by M-122M following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: IIITS-115 Sample pictures of this concept: ![0](https://huggingface.co/M-122M/my-german-shepard-avd/resolve/main/sample_images/avd_(2).jpg) ![1](https://huggingface.co/M-122M/my-german-shepard-avd/resolve/main/sample_images/avd_(6).jpg) ![2](https://huggingface.co/M-122M/my-german-shepard-avd/resolve/main/sample_images/avd_(3).jpg) ![3](https://huggingface.co/M-122M/my-german-shepard-avd/resolve/main/sample_images/avd_(4).jpg) ![4](https://huggingface.co/M-122M/my-german-shepard-avd/resolve/main/sample_images/avd_(1).jpg) ![5](https://huggingface.co/M-122M/my-german-shepard-avd/resolve/main/sample_images/avd_(5).jpg)
artificialhoney/graffiti
artificialhoney
2023-10-01T10:51:32Z
11
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "dataset:artificialhoney/graffiti", "base_model:Lykon/DreamShaper", "base_model:adapter:Lykon/DreamShaper", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-08-01T21:43:25Z
--- license: creativeml-openrail-m base_model: Lykon/DreamShaper tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet inference: true datasets: - artificialhoney/graffiti --- # artificialhoney/graffiti These are **controlnet** weights trained on [Lykon/DreamShaper](https://huggingface.co/Lykon/DreamShaper) with a dataset from [artificialhoney/graffiti](https://huggingface.co/datasets/artificialhoney/graffiti). You can create pretty pieces by passing an input sketch image to controlnet. ![Sketch](./images/sketch.png) A good resource for tag fonts is [dafont.com](https://www.dafont.com/theme.php?cat=606). ## Usage Following examples use a CLI to **diffusers**, which you can find on [GitHub](https://github.com/artificialhoney/giger): ```bash # Define base prompt prompt="graffiti on black background, in the colors purple and yellow" ``` ```bash # Extend prompt and add compel syntax (https://github.com/damian0815/compel) prompt=$(giger prompt "$prompt" --rendering_engine "Octane Render" --lightning_style "Cinematic" --resolution "8k" --compel_style "subtle") # ('graffiti on black background, in the colors purple and yellow', 'Octane Render, Cinematic, 8k').and() ``` ```bash # Generate image echo "$prompt" | giger image --output ./graffiti --name sketch --seed 0 --batch_count 10 --width 768 --height 432 --lora_model "OedoSoldier/detail-tweaker-lora" --lora_filename "add_detail.safetensors" --lora_scale 0.75 --input ./images/sketch.png --controlnet_model "artificialhoney/graffiti" --controlnet_conditioning_scale 0.45 ``` ## Examples ### Graffiti on black background, in the colors purple and yellow ![Seed 00](./images/seed-00.png) ![Seed 03](./images/seed-03.png) ### Graffiti on black background, in the colors purple and yellow, by 1Up Crew ![Seed 13](./images/seed-13.png) ![Seed 17](./images/seed-17.png)
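For users who prefer plain `diffusers` over the giger CLI above, the sketch below conditions DreamShaper on a sketch image through this ControlNet. It is an approximation of what the commands above do, not an exact reproduction — the prompt, step count and conditioning scale are illustrative.

```python
# Rough diffusers-only sketch of the giger workflow above: DreamShaper + this ControlNet,
# conditioned on a black-and-white tag sketch.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "artificialhoney/graffiti", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "Lykon/DreamShaper", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

sketch = load_image("images/sketch.png")  # the input sketch shown above
image = pipe(
    "graffiti on black background, in the colors purple and yellow",
    image=sketch,
    controlnet_conditioning_scale=0.45,  # same scale as the CLI example
    num_inference_steps=30,
).images[0]
image.save("graffiti.png")
```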
ell-hol/Mistral-7B-Instruct-v0.1
ell-hol
2023-10-01T10:49:27Z
40
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "finetuned", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-01T10:31:28Z
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
---

# Model Card for Mistral-7B-Instruct-v0.1

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, fine-tuned on a variety of publicly available conversation datasets.

For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/)

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence id; subsequent instructions should not. The assistant generation is ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Model Architecture

This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

## Troubleshooting

- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing transformers from source should solve the issue:

```
pip install git+https://github.com/huggingface/transformers
```

This should not be required after transformers-v4.33.4.

## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
VuongQuoc/checkpoints_30_9_microsoft_deberta_V1.0_384
VuongQuoc
2023-10-01T10:30:45Z
63
0
transformers
[ "transformers", "pytorch", "deberta-v2", "multiple-choice", "generated_from_trainer", "base_model:VuongQuoc/checkpoints_30_9_microsoft_deberta_V1.0_384", "base_model:finetune:VuongQuoc/checkpoints_30_9_microsoft_deberta_V1.0_384", "endpoints_compatible", "region:us" ]
multiple-choice
2023-09-30T05:27:13Z
--- base_model: VuongQuoc/checkpoints_30_9_microsoft_deberta_V1.0_384 tags: - generated_from_trainer metrics: - accuracy model-index: - name: checkpoints_30_9_microsoft_deberta_V1.0_384 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # checkpoints_30_9_microsoft_deberta_V1.0_384 This model is a fine-tuned version of [VuongQuoc/checkpoints_30_9_microsoft_deberta_V1.0_384](https://huggingface.co/VuongQuoc/checkpoints_30_9_microsoft_deberta_V1.0_384) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5746 - Map@3: 0.7625 - Accuracy: 0.655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-07 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Map@3 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | 1.6081 | 0.05 | 100 | 1.6083 | 0.7092 | 0.585 | | 1.6107 | 0.11 | 200 | 1.6078 | 0.7375 | 0.625 | | 1.6077 | 0.16 | 300 | 1.6070 | 0.7517 | 0.65 | | 1.6097 | 0.21 | 400 | 1.6055 | 0.7542 | 0.645 | | 1.6083 | 0.27 | 500 | 1.6030 | 0.7650 | 0.65 | | 1.6006 | 0.32 | 600 | 1.5989 | 0.7733 | 0.665 | | 1.5932 | 0.37 | 700 | 1.5927 | 0.7742 | 0.66 | | 1.5881 | 0.43 | 800 | 1.5858 | 0.7742 | 0.665 | | 1.578 | 0.48 | 900 | 1.5800 | 0.7708 | 0.66 | | 1.5717 | 0.53 | 1000 | 1.5763 | 0.7658 | 0.655 | | 1.5677 | 0.59 | 1100 | 1.5748 | 0.7625 | 0.655 | | 1.5666 | 0.64 | 1200 | 1.5746 | 0.7625 | 0.655 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.0 - Datasets 2.9.0 - Tokenizers 0.13.3
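The card does not show how to query the checkpoint, so here is a minimal inference sketch using `AutoModelForMultipleChoice`. The question and options below are made up, and the exact input construction used in the original training notebook is not documented here.

```python
# Minimal sketch of scoring answer options with this multiple-choice checkpoint.
# The (question, option) pairing below is the standard AutoModelForMultipleChoice layout;
# it may differ from the preprocessing used during training.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

repo_id = "VuongQuoc/checkpoints_30_9_microsoft_deberta_V1.0_384"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMultipleChoice.from_pretrained(repo_id)

question = "Which particle carries a negative electric charge?"
options = ["Proton", "Neutron", "Electron", "Photon", "Neutrino"]

# One (question, option) pair per candidate answer, batched as a single example.
enc = tokenizer(
    [question] * len(options),
    options,
    truncation=True,
    max_length=384,
    padding=True,
    return_tensors="pt",
)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
ranking = logits[0].argsort(descending=True)
print("Top-3 choices:", [options[i] for i in ranking[:3]])
```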
begeri/Pyramids
begeri
2023-10-01T10:26:10Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-10-01T10:23:04Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: begeri/Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
YoungMeng/ppo-BipedalWalker
YoungMeng
2023-10-01T10:26:05Z
0
0
stable-baselines3
[ "stable-baselines3", "BipedalWalker-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-01T10:25:28Z
--- library_name: stable-baselines3 tags: - BipedalWalker-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: BipedalWalker-v3 type: BipedalWalker-v3 metrics: - type: mean_reward value: -82.48 +/- 26.99 name: mean_reward verified: false --- # **PPO** Agent playing **BipedalWalker-v3** This is a trained model of a **PPO** agent playing **BipedalWalker-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
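A possible completion of the usage stub above is sketched below. The checkpoint filename inside the repo is assumed to follow the usual `huggingface_sb3` convention (`ppo-BipedalWalker-v3.zip`); check the repository's file list if loading fails.

```python
# Sketch only -- the filename is an assumption based on the usual huggingface_sb3 naming.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="YoungMeng/ppo-BipedalWalker",
    filename="ppo-BipedalWalker-v3.zip",  # assumed checkpoint name
)
model = PPO.load(checkpoint)

env = gym.make("BipedalWalker-v3")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```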
YoungMeng/ppo-BipedalWalker-test
YoungMeng
2023-10-01T10:24:13Z
1
0
stable-baselines3
[ "stable-baselines3", "BipedalWalker-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-24T16:39:44Z
--- library_name: stable-baselines3 tags: - BipedalWalker-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: BipedalWalker-v3 type: BipedalWalker-v3 metrics: - type: mean_reward value: -88.41 +/- 16.45 name: mean_reward verified: false --- # **PPO** Agent playing **BipedalWalker-v3** This is a trained model of a **PPO** agent playing **BipedalWalker-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Yntec/RealCartoon3D
Yntec
2023-10-01T10:21:06Z
700
2
diffusers
[ "diffusers", "safetensors", "Anime", "Digital art", "Female", "7whitefire7", "text-to-image", "stable-diffusion", "stable-diffusion-diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-01T09:48:38Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Digital art - Female - 7whitefire7 - text-to-image - stable-diffusion - stable-diffusion-diffusers - diffusers --- Original page: https://civitai.com/models/94809?modelVersionId=101225 Samples and prompts: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/ntX1TRplAtpFwvsbv-5EM.png) ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/aroVJuyAIXFwJlCqOKi0x.png) realistic, realistic details, detailed, pretty CUTE girl, solo, dynamic pose, narrow, full body, cowboy shot, oiran portrait, sweet smile, fantasy, blues pinks and teals, copper, gold, coiling flowers, extremely detailed clothes, masterpiece, 8k, trending on pixiv, highest quality. (masterpiece, best quality), (highly detailed)
le-vh/tinyllama-1.1B-chat-finetuned
le-vh
2023-10-01T10:14:36Z
1
0
peft
[ "peft", "region:us" ]
null
2023-10-01T10:14:34Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
vladkolev/distilbert-base-multiling-finetuned-emotion-bg
vladkolev
2023-10-01T10:07:11Z
118
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-multilingual-cased", "base_model:finetune:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-23T08:42:34Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy base_model: distilbert-base-multilingual-cased model-index: - name: distilbert-base-multiling-finetuned-emotion-bg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multiling-finetuned-emotion-bg This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5969 - Accuracy: 0.8229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1726 | 1.0 | 748 | 0.7208 | 0.7540 | | 0.5873 | 2.0 | 1496 | 0.5815 | 0.8028 | | 0.4152 | 3.0 | 2244 | 0.5605 | 0.8148 | | 0.3036 | 4.0 | 2992 | 0.5905 | 0.8182 | | 0.2402 | 5.0 | 3740 | 0.5969 | 0.8229 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
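A minimal usage sketch with the `transformers` pipeline API is given below. The emotion label set is not listed in the card, so the returned labels are whatever the checkpoint's config defines.

```python
# Minimal sketch: run the emotion classifier through the text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="vladkolev/distilbert-base-multiling-finetuned-emotion-bg",
    top_k=None,  # return a score for every emotion label
)
print(classifier("Днес се чувствам прекрасно!"))  # "I feel great today!" in Bulgarian
```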
vladkolev/distilbert-base-cased-distilled-emotion-bg
vladkolev
2023-10-01T10:07:03Z
108
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-multilingual-cased", "base_model:finetune:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-23T13:54:58Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy base_model: distilbert-base-multilingual-cased model-index: - name: distilbert-base-cased-distilled-emotion-bg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-cased-distilled-emotion-bg This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5784 - Accuracy: 0.8061 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3346 | 1.0 | 187 | 1.0077 | 0.6036 | | 0.763 | 2.0 | 374 | 0.6359 | 0.7868 | | 0.4931 | 3.0 | 561 | 0.5821 | 0.8008 | | 0.3635 | 4.0 | 748 | 0.5784 | 0.8061 | | 0.2724 | 5.0 | 935 | 0.5829 | 0.8189 | | 0.2116 | 6.0 | 1122 | 0.5872 | 0.8168 | | 0.1684 | 7.0 | 1309 | 0.6480 | 0.8148 | | 0.1336 | 8.0 | 1496 | 0.6630 | 0.8122 | | 0.112 | 9.0 | 1683 | 0.6836 | 0.8222 | | 0.0966 | 10.0 | 1870 | 0.6859 | 0.8202 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
Crataco/RWKV-4-PilePlus-Series-GGML
Crataco
2023-10-01T10:00:15Z
0
3
null
[ "ggml", "text-generation", "causal-lm", "rwkv", "en", "dataset:EleutherAI/pile", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "region:us" ]
text-generation
2023-05-24T01:57:25Z
---
language:
- en
tags:
- ggml
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- EleutherAI/pile
- togethercomputer/RedPajama-Data-1T
---

**Last updated:** 2023-06-07

This is [BlinkDL/rwkv-4-pileplus](https://huggingface.co/BlinkDL/rwkv-4-pileplus) converted to GGML for use with rwkv.cpp and KoboldCpp. [rwkv.cpp's conversion instructions](https://github.com/saharNooby/rwkv.cpp#option-32-convert-and-quantize-pytorch-model) were followed.

### RAM USAGE (KoboldCpp)

Model | RAM usage (with OpenBLAS)
:--:|:--:
Unloaded | 41.3 MiB
169M q4_0 | 232.2 MiB
169M q5_0 | 243.3 MiB
169M q5_1 | 249.2 MiB
430M q4_0 | 413.2 MiB
430M q5_0 | 454.4 MiB
430M q5_1 | 471.8 MiB
1.5B q4_0 | 1.1 GiB
1.5B q5_0 | 1.3 GiB
1.5B q5_1 | 1.3 GiB
3B q4_0 | 2.0 GiB
3B q5_0 | 2.3 GiB
3B q5_1 | 2.4 GiB

Original model card by BlinkDL is below.

* * *

# RWKV-4 PilePlus

## Model Description

RWKV-4 Pile models finetuned on [RedPajama + some of Pile v2 = 1.7T tokens]. Updated with 2020+2021+2022 data, and better at all European languages.

Although some of these are intermediate checkpoints (XXXGtokens means finetuned for XXXG tokens), you can already use them, because I am finetuning from the Pile models (instead of retraining). Note: not instruct-tuned yet; they are recommended as replacements for the vanilla Pile models. 7B and 14B coming soon.

See https://github.com/BlinkDL/RWKV-LM for details. Use https://github.com/BlinkDL/ChatRWKV to run it.
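As noted above, these files were produced with rwkv.cpp's conversion scripts. A rough sketch of that process is shown below; the script paths and arguments follow the rwkv.cpp README at the time of writing and may differ in newer versions, and the checkpoint filename is a placeholder.

```bash
# Sketch only -- script names/arguments may change between rwkv.cpp versions.
# 1) Convert the PyTorch checkpoint to an f16 GGML file
python rwkv/convert_pytorch_to_ggml.py path/to/RWKV-4-PilePlus-checkpoint.pth rwkv-pileplus-f16.bin FP16
# 2) Quantize the f16 file, e.g. to q5_1
python rwkv/quantize.py rwkv-pileplus-f16.bin rwkv-pileplus-q5_1.bin Q5_1
```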
duytintruong/Taxi-v3
duytintruong
2023-10-01T09:49:51Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-01T09:49:48Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.74
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="duytintruong/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
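As a rough illustration (not part of the original card), the snippet below downloads the pickle with `huggingface_hub` and rolls out one greedy episode; it assumes the file holds a dict with `"env_id"` and `"qtable"` keys, as in the Deep RL Course notebooks.

```python
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Assumption: the pickle is a dict with "env_id" and "qtable" entries.
path = hf_hub_download(repo_id="duytintruong/Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, _ = env.reset()
done, episode_return = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print("episode return:", episode_return)
```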
Crataco/RWKV-4-World-Series-GGML
Crataco
2023-10-01T09:44:51Z
0
3
null
[ "rwkv", "text-generation", "causal-lm", "ggml", "en", "zh", "de", "fr", "es", "pt", "ru", "it", "ja", "ko", "vi", "ar", "dataset:EleutherAI/pile", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "region:us" ]
text-generation
2023-09-28T15:55:19Z
--- license: apache-2.0 datasets: - EleutherAI/pile - togethercomputer/RedPajama-Data-1T language: - en - zh - de - fr - es - pt - ru - it - ja - ko - vi - ar thumbnail: tags: - rwkv - text-generation - causal-lm - ggml inference: false --- # RWKV-4 World GGML ### This repository contains quantized conversions of the current RWKV-4 World checkpoints. *For use with frontends that support GGML quantized RWKV models, such as rwkv.cpp and KoboldCpp.* *Last updated on 2023-09-28.* **Description:** - The motivation behind these quantizations was that latestissue's quants were missing the 0.1B and 0.4B models. The rest of the models can be found here: [latestissue/rwkv-4-world-ggml-quantized](https://huggingface.co/latestissue/rwkv-4-world-ggml-quantized) # RAM USAGE Model | Starting RAM usage (KoboldCpp) :--:|:--: RWKV-4-World-0.1B.q4_0.bin | 289.3 MiB RWKV-4-World-0.1B.q4_1.bin | 294.7 MiB RWKV-4-World-0.1B.q5_0.bin | 300.2 MiB RWKV-4-World-0.1B.q5_1.bin | 305.7 MiB RWKV-4-World-0.1B.q8_0.bin | 333.1 MiB RWKV-4-World-0.1B.f16.bin | 415.3 MiB | RWKV-4-World-0.4B.q4_0.bin | 484.1 MiB RWKV-4-World-0.4B.q4_1.bin | 503.7 MiB RWKV-4-World-0.4B.q5_0.bin | 523.1 MiB RWKV-4-World-0.4B.q5_1.bin | 542.7 MiB RWKV-4-World-0.4B.q8_0.bin | 640.2 MiB RWKV-4-World-0.4B.f16.bin | 932.7 MiB | RWKV-4-World-1.5B.q4_0.bin | 1.2 GiB RWKV-4-World-1.5B.q4_1.bin | 1.3 GiB RWKV-4-World-1.5B.q5_0.bin | 1.4 GiB RWKV-4-World-1.5B.q5_1.bin | 1.5 GiB RWKV-4-World-1.5B.q8_0.bin | 1.9 GiB RWKV-4-World-1.5B.f16.bin | 3.0 GiB **Notes:** - rwkv.cpp [[0df970a]](https://github.com/saharNooby/rwkv.cpp/tree/0df970a6adddd4b938795f92e660766d1e2c1c1f) was used for conversion and quantization. First they were converted to f16 ggml files, then quantized. - KoboldCpp [[bc841ec]](https://github.com/LostRuins/koboldcpp/tree/bc841ec30232036a1e231c0b057689abc3aa00cf) was used to test the model. The original models can be found [here](https://huggingface.co/BlinkDL/rwkv-4-world), and the original model card can be found below. * * * # RWKV-4 World ## Model Description RWKV-4 trained on 100+ world languages (70% English, 15% multilang, 15% code). World = Some_Pile + Some_RedPajama + Some_OSCAR + All_Wikipedia + All_ChatGPT_Data_I_can_find XXXtuned = finetune of World on MC4, OSCAR, wiki, etc. How to use: * use https://github.com/josStorer/RWKV-Runner for GUI * use latest rwkv pip package (0.8.0+) * use https://github.com/BlinkDL/ChatRWKV/blob/main/v2/benchmark_world.py and https://github.com/BlinkDL/ChatRWKV/blob/main/API_DEMO_WORLD.py to test it The differences between World & Raven: * set pipeline = PIPELINE(model, "rwkv_vocab_v20230424") instead of 20B_tokenizer.json (EXACTLY AS WRITTEN HERE. "rwkv_vocab_v20230424" is included in rwkv 0.7.4+) * use Question/Answer or User/AI or Human/Bot for chat. **DO NOT USE Bob/Alice or Q/A** For 0.1/0.4/1.5B models, use **fp32** for first layer (will overflow in fp16 at this moment - fixable in future), or bf16 if you have 30xx/40xx GPUs. Example strategy: cuda fp32 *1 -> cuda fp16 NOTE: the new greedy tokenizer (https://github.com/BlinkDL/ChatRWKV/blob/main/tokenizer/rwkv_tokenizer.py) will tokenize '\n\n' as one single token instead of ['\n','\n'] QA prompt (replace \n\n in xxx to \n): ``` Question: xxx Answer: ``` and ``` Instruction: xxx Input: xxx Response: ``` A good chat prompt (replace \n\n in xxx to \n): ``` User: hi Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it. User: xxx Assistant: ```
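As a small illustration of the QA prompt format described above (not from the original card), the helper below builds the prompt and applies the recommended replacement of `'\n\n'` with `'\n'` inside the user text, so that the double newline only marks turn boundaries:

```python
# Minimal prompt-building sketch for the RWKV-4 World QA format.
def qa_prompt(question: str) -> str:
    # Replace '\n\n' inside the question with '\n', as recommended above.
    question = question.strip().replace("\n\n", "\n")
    return f"Question: {question}\n\nAnswer:"

print(qa_prompt("Write a short poem about\n\nthe sea."))
```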
futranbg/llama2_langchain_7b_chat_GGUF
futranbg
2023-10-01T09:34:17Z
41
0
null
[ "gguf", "text-generation", "pt", "en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "tr", "ro", "ar", "uk", "ko", "gl", "fr", "nl", "dataset:Photolens/oasst1-langchain-llama-2-formatted", "license:llama2", "region:us" ]
text-generation
2023-10-01T09:34:17Z
---
inference: false
language:
- pt
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- tr
- ro
- ar
- uk
- ko
- gl
- fr
- nl
license: llama2
model_creator: Photolens
model_link: https://huggingface.co/Photolens/llama-2-7b-langchain-chat
model_name: llama-2-7b-langchain-chat
model_type: llama
quantized_by: lucianosb
pipeline_tag: text-generation
datasets:
- Photolens/oasst1-langchain-llama-2-formatted
---

# llama-2-7b-langchain-chat - GGUF

- Model creator: [Photolens](https://huggingface.co/Photolens)
- Original model: [llama-2-7b-langchain-chat](https://huggingface.co/Photolens/llama-2-7b-langchain-chat)

## Included Files

| Name | Quant Method | Bits | Size | Description |
| ---- | ---- | ---- | ---- | ----- |
| [llama-2-7b-langchain-chat-q4_0.gguf](https://huggingface.co/lucianosb/llama-2-7b-langchain-chat-GGUF/blob/main/llama-2-7b-langchain-chat-q4_0.gguf) | q4_0 | 4 | 3.56 GB | 4-bit quantization. |
| [llama-2-7b-langchain-chat-q4_1.gguf](https://huggingface.co/lucianosb/llama-2-7b-langchain-chat-GGUF/blob/main/llama-2-7b-langchain-chat-q4_1.gguf) | q4_1 | 4 | 3.95 GB | 4-bit quantization. Higher accuracy than q4_0 but not as good as q5_0. Faster inference than the q5 models. |
| [llama-2-7b-langchain-chat-q5_0.gguf](https://huggingface.co/lucianosb/llama-2-7b-langchain-chat-GGUF/blob/main/llama-2-7b-langchain-chat-q5_0.gguf) | q5_0 | 5 | 4.33 GB | 5-bit quantization. Better accuracy, higher resource usage, slower inference. |
| [llama-2-7b-langchain-chat-q5_1.gguf](https://huggingface.co/lucianosb/llama-2-7b-langchain-chat-GGUF/blob/main/llama-2-7b-langchain-chat-q5_1.gguf) | q5_1 | 5 | 4.72 GB | 5-bit quantization. Even better accuracy, higher resource usage, slower inference. |
| [llama-2-7b-langchain-chat-q8_0.gguf](https://huggingface.co/lucianosb/llama-2-7b-langchain-chat-GGUF/blob/main/llama-2-7b-langchain-chat-q8_0.gguf) | q8_0 | 8 | 6.67 GB | 8-bit quantization. Almost indistinguishable from float16. Uses a lot of resources and is slower. |

**Note**: the RAM figures above assume no GPU offloading. If layers are offloaded to the GPU, RAM usage is reduced and VRAM is used instead.

## How to run with `llama.cpp`

I used the following command. Adjust it to your needs:

```
./main -m ./models/llama-2-7b-langchain-chat/llama-2-7b-langchain-chat-q5_1.gguf --color --temp 0.5 -n 256 -p "<s>[INST] Há muito tempo atrás, numa galáxia distante [/INST] Assistant Message </s>"
```

To understand the parameters, see [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

## About the GGUF format

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

The main benefit of GGUF is that it is an extensible, future-proof format that stores more information about the model as metadata. It also includes significantly improved tokenization code, including full support for special tokens for the first time. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.

Here is a list of clients and libraries known to support GGUF:

- [llama.cpp](https://github.com/ggerganov/llama.cpp).
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - the llama-cpp-python backend should work soon as well.
- [KoboldCpp](https://github.com/LostRuins/koboldcpp), which supports GGUF as of version 1.41! A powerful GGML web UI, with full GPU acceleration. Especially good for storytelling.
- [LM Studio](https://lmstudio.ai), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD) and macOS.
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), which should now work; choose the c_transformers backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
- [ctransformers](https://github.com/marella/ctransformers), which supports GGUF as of version 0.2.24! A Python library with GPU acceleration, LangChain support and an OpenAI-compatible AI server.
- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), which supports GGUF as of version 0.1.79. A Python library with GPU acceleration, LangChain support and an OpenAI-compatible API server.
- [candle](https://github.com/huggingface/candle), which added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- [LocalAI](https://github.com/go-skynet/LocalAI), which added GGUF support on August 23rd. LocalAI provides a REST API for LLM and image-generation models.

## Template

````
<s>[INST] Prompter Message [/INST] Assistant Message </s>
````
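As a rough, non-authoritative sketch of loading one of the files above with llama-cpp-python (the local path is a placeholder; download the .gguf file first):

```python
from llama_cpp import Llama

# Placeholder path: point this at the downloaded .gguf file.
llm = Llama(model_path="./llama-2-7b-langchain-chat-q5_1.gguf", n_ctx=2048)

prompt = "<s>[INST] Há muito tempo atrás, numa galáxia distante [/INST]"
out = llm(prompt, max_tokens=256, temperature=0.5)
print(out["choices"][0]["text"])
```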
AfshanAhmed/sd-t-shirt-model-v2
AfshanAhmed
2023-10-01T09:32:43Z
0
0
diffusers
[ "diffusers", "text-to-image", "region:us" ]
text-to-image
2023-09-28T06:12:52Z
--- library_name: diffusers pipeline_tag: text-to-image ---
robxiao/rl_huggy
robxiao
2023-10-01T09:17:41Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-10-01T09:17:15Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: robxiao/rl_huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
JapGuy/NoName_v1_885Epochs_RVC_v2
JapGuy
2023-10-01T09:14:46Z
0
0
null
[ "music", "rvc", "NoName", "No", "Name", "Igor", "Timko", "IgorTimko", "model", "audio-to-audio", "sk", "cs", "license:openrail", "region:us" ]
audio-to-audio
2023-10-01T09:07:58Z
---
license: openrail
language:
- sk
- cs
pipeline_tag: audio-to-audio
tags:
- music
- rvc
- NoName
- No
- Name
- Igor
- Timko
- IgorTimko
- model
---
![image.png](https://gcdnb.pbrd.co/images/hWhkFLegpViQ.jpg)

# No Name - Igor Timko [SK] (v1)
# 885 Epochs - RVC V2 - mangio-creep - 64 Hop Length

Trained on 49 minutes of isolated acapellas extracted with UVR (Voc FT + Reverb HQ), with Audacity used to remove sections containing doubled vocals or vocals from other singers (plus a noise gate).
LoneStriker/Synthia-7B-v1.3-4.0bpw-h6-exl2
LoneStriker
2023-10-01T09:03:52Z
4
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "arxiv:2306.02707", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-29T11:51:12Z
---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
library_name: transformers
---

Change from Synthia-7B-v1.2 -> Synthia-7B-v1.3: Base model was changed from LLaMA-2-7B to Mistral-7B-v0.1

All Synthia models are uncensored. Please use them with caution and with the best intentions. You are responsible for how you use Synthia.

To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:

```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```

# Synthia-7B-v1.3

SynthIA (Synthetic Intelligent Agent) 7B-v1.3 is a Mistral-7B-v0.1 model trained on Orca style datasets. It has been fine-tuned for instruction following as well as long-form conversations.

<br>

#### License Disclaimer:

This model is released under Apache 2.0, and comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated Synthia-7B-v1.3 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.6237|
|*hellaswag*|acc_norm|0.8349|
|*mmlu*|acc_norm|0.6232|
|*truthfulqa_mc*|mc2|0.5125|
|**Total Average**|-|**0.6485**|

<br>

## Example Usage

### Here is the prompt format:

```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```

### Below is a code example showing how to use this model:

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-7B-v1.3"
output_file_path = "./Synthia-7B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{Synthia-7B-v1.3,
  author = {Migel Tissera},
  title = {Synthia-7B-v1.3: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```

```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
LoneStriker/Synthia-7B-v1.3-3.0bpw-h6-exl2
LoneStriker
2023-10-01T09:03:38Z
7
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "arxiv:2306.02707", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-29T11:51:03Z
---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
library_name: transformers
---

Change from Synthia-7B-v1.2 -> Synthia-7B-v1.3: Base model was changed from LLaMA-2-7B to Mistral-7B-v0.1

All Synthia models are uncensored. Please use them with caution and with the best intentions. You are responsible for how you use Synthia.

To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:

```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```

# Synthia-7B-v1.3

SynthIA (Synthetic Intelligent Agent) 7B-v1.3 is a Mistral-7B-v0.1 model trained on Orca style datasets. It has been fine-tuned for instruction following as well as long-form conversations.

<br>

#### License Disclaimer:

This model is released under Apache 2.0, and comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated Synthia-7B-v1.3 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.6237|
|*hellaswag*|acc_norm|0.8349|
|*mmlu*|acc_norm|0.6232|
|*truthfulqa_mc*|mc2|0.5125|
|**Total Average**|-|**0.6485**|

<br>

## Example Usage

### Here is the prompt format:

```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```

### Below is a code example showing how to use this model:

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-7B-v1.3"
output_file_path = "./Synthia-7B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{Synthia-7B-v1.3,
  author = {Migel Tissera},
  title = {Synthia-7B-v1.3: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```

```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
Quacktab/q-FrozenLake-v1-4x4-noSlippery
Quacktab
2023-10-01T09:02:40Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-01T09:02:36Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="Quacktab/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
kkboy1/Test1
kkboy1
2023-10-01T08:50:28Z
0
0
adapter-transformers
[ "adapter-transformers", "en", "dataset:fka/awesome-chatgpt-prompts", "doi:10.57967/hf/1172", "region:us" ]
null
2023-10-01T08:48:23Z
--- datasets: - fka/awesome-chatgpt-prompts language: - en metrics: - accuracy library_name: adapter-transformers ---
Yntec/Oiran
Yntec
2023-10-01T08:23:16Z
388
2
diffusers
[ "diffusers", "safetensors", "Anime", "Artstyle", "Clothing", "KimiKoro", "timevisitor", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-30T16:21:21Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Artstyle - Clothing - KimiKoro - timevisitor - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # Oiran The Oiran Traditional Fashion LoRA merged with RealBackground v12. Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/jjD1nCTpaLMJX_zeiM5RX.png) realistic, realistic details, detailed, pretty CUTE girl, solo, dynamic pose, narrow, full body, cowboy shot, oiran portrait, sweet smile, fantasy, blues pinks and greens, blue copper, coiling flowers, extremely detailed clothes, masterpiece, 8k, trending on pixiv, highest quality. (masterpiece:1.2, best quality), (highly detailed:1.3) Original Pages: https://civitai.com/models/84366?modelVersionId=89690 (Oiran) https://civitai.com/models/24122/real-background-cartoon?modelVersionId=32593 (Real Background Cartoon) # Recipe For the purposes of mixing the LoRA with RealBackground, a "full" version of this model was produced that makes outputs identical to a fresh UI. - SuperMerger Weight sum Train Difference Use MBW 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 Model A: SD 1.5 (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors) Model B: RealBackground v12 Output: RealBackground Full v12 - Merge Oiran Traditional Fashion LoRA to checkpoint 1.0 Model A: RealBackground Full v12 Output: Oiran
begeri/Reinforce-pixelcopter
begeri
2023-10-01T08:21:48Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-10-01T08:21:44Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 22.70 +/- 13.22 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
chaocai/llama2-ft
chaocai
2023-10-01T07:53:51Z
3
0
transformers
[ "transformers", "tensorboard", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-10-01T01:03:58Z
--- base_model: meta-llama/Llama-2-7b-chat-hf tags: - generated_from_trainer model-index: - name: llama2-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-ft This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
tanvirsrbd1/flan-t5-base-model2
tanvirsrbd1
2023-10-01T07:28:23Z
159
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-01T07:20:53Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: flan-t5-base-model2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-model2 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2236 - Rouge1: 73.2746 - Rouge2: 65.1173 - Rougel: 72.149 - Rougelsum: 73.1838 - Gen Len: 16.1625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 11.4224 | 0.71 | 200 | 0.5002 | 49.2994 | 40.066 | 49.0355 | 49.3658 | 7.6113 | | 0.4129 | 1.41 | 400 | 0.3030 | 72.7746 | 63.8666 | 71.4539 | 72.5604 | 16.2138 | | 0.3131 | 2.12 | 600 | 0.2793 | 73.6239 | 64.6532 | 72.2694 | 73.5208 | 16.1148 | | 0.2615 | 2.83 | 800 | 0.2674 | 73.2672 | 64.6251 | 72.0459 | 73.0736 | 16.2067 | | 0.2347 | 3.53 | 1000 | 0.2631 | 73.0069 | 64.3272 | 71.8482 | 72.963 | 16.2049 | | 0.2222 | 4.24 | 1200 | 0.2437 | 73.3821 | 64.9656 | 72.1995 | 73.2511 | 16.0795 | | 0.2077 | 4.95 | 1400 | 0.2450 | 73.1663 | 64.7168 | 72.023 | 73.0977 | 16.0936 | | 0.1976 | 5.65 | 1600 | 0.2296 | 73.2977 | 64.8011 | 72.2179 | 73.3089 | 16.1661 | | 0.1804 | 6.36 | 1800 | 0.2268 | 73.1599 | 64.852 | 72.0518 | 73.1532 | 16.1802 | | 0.1842 | 7.07 | 2000 | 0.2284 | 73.2343 | 64.944 | 72.046 | 73.1038 | 16.159 | | 0.1776 | 7.77 | 2200 | 0.2255 | 73.3332 | 65.119 | 72.1684 | 73.2489 | 16.1449 | | 0.1621 | 8.48 | 2400 | 0.2231 | 73.2057 | 64.9477 | 72.1727 | 73.1358 | 16.1219 | | 0.1657 | 9.19 | 2600 | 0.2234 | 73.2285 | 65.0575 | 72.0227 | 73.2392 | 16.1608 | | 0.1653 | 9.89 | 2800 | 0.2236 | 73.2746 | 65.1173 | 72.149 | 73.1838 | 16.1625 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
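A hedged inference sketch for this checkpoint (the input string is a placeholder, since the fine-tuning dataset is not documented above):

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="tanvirsrbd1/flan-t5-base-model2")
# Placeholder input; replace with text from the (undocumented) target task.
print(generator("Your input text here", max_new_tokens=64)[0]["generated_text"])
```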
gbharathi80/mt5-small-finetuned-amazon-en-es
gbharathi80
2023-10-01T07:24:56Z
68
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "summarization", "es", "en", "dataset:amazon_reviews_multi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-08-18T08:55:47Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gbharathi80/mt5-small-finetuned-amazon-en-es
  results: []
datasets:
- amazon_reviews_multi
language:
- es
- en
metrics:
- bleu
- rouge
pipeline_tag: summarization
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# gbharathi80/mt5-small-finetuned-amazon-en-es

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an Amazon reviews dataset. It achieves the following results on the evaluation set:
- Train Loss: 4.2325
- Validation Loss: 3.4452
- Epoch: 7

## Model description

This is a fine-tuned version of the google/mt5-small model for text summarization of English and Spanish Amazon reviews.

## Intended uses & limitations

Multilingual text summarization. The model was trained on Spanish and English reviews.

## Training and evaluation data

DatasetDict({
    train: Dataset({
        features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'],
        num_rows: 200000
    })
    validation: Dataset({
        features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'],
        num_rows: 5000
    })
    test: Dataset({
        features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'],
        num_rows: 5000
    })
})

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.7747 | 4.7510 | 0 |
| 6.3001 | 4.0096 | 1 |
| 5.4388 | 3.7376 | 2 |
| 4.9710 | 3.6136 | 3 |
| 4.6689 | 3.5349 | 4 |
| 4.4622 | 3.4885 | 5 |
| 4.3101 | 3.4537 | 6 |
| 4.2325 | 3.4452 | 7 |

### Framework versions

- Transformers 4.21.1
- TensorFlow 2.9.1
- Datasets 2.4.0
- Tokenizers 0.12.1
aqachun/vilt_finetuned_200
aqachun
2023-10-01T07:07:20Z
61
0
transformers
[ "transformers", "pytorch", "vilt", "visual-question-answering", "generated_from_trainer", "dataset:vqa", "base_model:dandelin/vilt-b32-mlm", "base_model:finetune:dandelin/vilt-b32-mlm", "license:apache-2.0", "endpoints_compatible", "region:us" ]
visual-question-answering
2023-09-13T02:50:37Z
--- license: apache-2.0 base_model: dandelin/vilt-b32-mlm tags: - generated_from_trainer datasets: - vqa model-index: - name: vilt_finetuned_200 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vilt_finetuned_200 This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on the vqa dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.33.1 - Pytorch 1.12.1+cu113 - Datasets 2.14.5 - Tokenizers 0.13.3
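A hedged usage sketch with the visual-question-answering pipeline (the image path and question are placeholders):

```python
from transformers import pipeline
from PIL import Image

vqa = pipeline("visual-question-answering", model="aqachun/vilt_finetuned_200")
image = Image.open("example.jpg")  # placeholder image path
print(vqa(image=image, question="What is in the picture?", top_k=1))
```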
mohankrishnan/Falcon-7B-Tiger-MathInstruct
mohankrishnan
2023-10-01T07:05:49Z
0
0
peft
[ "peft", "region:us" ]
null
2023-10-01T07:05:43Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
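A hedged loading sketch: the base model id below is an assumption inferred from the repository name, and the 4-bit settings mirror the training config listed above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "tiiuae/falcon-7b"  # assumption: base model inferred from the repo name

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the PEFT adapter weights from this repository.
model = PeftModel.from_pretrained(base, "mohankrishnan/Falcon-7B-Tiger-MathInstruct")
```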
Bisnu/whisper-small-dv
Bisnu
2023-10-01T06:46:27Z
76
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dv", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-10-01T04:36:27Z
--- language: - dv license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Whisper Small Dv - Bisnu sarkar results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13 type: mozilla-foundation/common_voice_13_0 config: dv split: test args: dv metrics: - name: Wer type: wer value: 12.72733595298536 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Dv - Bisnu sarkar This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.1677 - Wer Ortho: 62.0238 - Wer: 12.7273 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.1225 | 1.63 | 500 | 0.1677 | 62.0238 | 12.7273 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
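A hedged transcription sketch (the audio file path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Bisnu/whisper-small-dv")
print(asr("sample.wav")["text"])  # placeholder audio file
```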
Amith567/my-pet-dog
Amith567
2023-10-01T06:33:09Z
0
0
null
[ "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-10-01T06:30:37Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by Amith567 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: AWHEC77- Sample pictures of this concept: ![0](https://huggingface.co/Amith567/my-pet-dog/resolve/main/sample_images/aio(1).jpg)
jonathanparaschou/my_awesome_model
jonathanparaschou
2023-10-01T06:22:45Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:sst2", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-28T19:15:01Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer datasets: - sst2 model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the sst2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
DamarJati/NSFW-Filterization-DecentScan
DamarJati
2023-10-01T06:02:23Z
224
8
transformers
[ "transformers", "pytorch", "swin", "image-classification", "art", "en", "dataset:DamarJati/NSFW-filter-DecentScan", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-01T05:41:32Z
--- datasets: - DamarJati/NSFW-filter-DecentScan language: - en pipeline_tag: image-classification tags: - art ---
Aungria/ppo-LunarLander-v2_2
Aungria
2023-10-01T05:55:31Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-01T05:55:10Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 274.12 +/- 18.14 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
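Since the usage section above is still a TODO, here is a hedged sketch of loading and evaluating the agent; the checkpoint filename is an assumption, so check the repository's file list for the actual .zip name.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename: verify against the files in this repository.
checkpoint = load_from_hub(repo_id="Aungria/ppo-LunarLander-v2_2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```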
Brouz/MaximalSlerp
Brouz
2023-10-01T05:04:20Z
17
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-29T19:10:36Z
--- license: llama2 --- GGUFs here https://huggingface.co/Brouz/MaximalSlerp-GGUF Gradient Slerp merge of https://huggingface.co/Gryphe/MythoLogic-L2-13b and https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-v1.2 Using Mergekit with the YAML branch https://github.com/cg123/mergekit/tree/yaml Original Mythomax script: https://github.com/Gryphe/BlockMerge_Gradient/blob/main/YAML/MythoMix-Variant-L2-13b.yaml Divine intellect or mental retardation? ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6303c93f0907b9a115c3a234/MsmERIYiL6VkCD6nj4_ck.png)
BrianDsouzaAI/autotrain-tab-multi-92337144714
BrianDsouzaAI
2023-10-01T05:01:57Z
2
0
transformers
[ "transformers", "joblib", "xgboost", "autotrain", "tabular", "classification", "tabular-classification", "dataset:BrianDsouzaAI/autotrain-data-tab-multi", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
tabular-classification
2023-10-01T04:21:28Z
--- tags: - autotrain - tabular - classification - tabular-classification datasets: - BrianDsouzaAI/autotrain-data-tab-multi widget: structuredData: Planned_Stories: - 10 - 20 - 30 Delivered_Stories: - 11 - 12 - 13 co2_eq_emissions: emissions: 2.067391665478424 --- # Model Trained Using AutoTrain - Problem type: Multi-label Classification - Model ID: 92337144714 - CO2 Emissions (in grams): 2.0674 ## Validation Metrics - Loss: 3.634 ## Usage ```python import json import joblib import numpy as np import pandas as pd models = joblib.load('model.joblib') config = json.load(open('config.json')) features = config['features'] # data = pd.read_csv("data.csv") data = data[features] data.columns = ["feat_" + str(col) for col in data.columns] predictions = [] for model_ in models: predictions_ = model_.predict(data) # or model.predict_proba(data)[:, 1] predictions.append(predictions_) predictions = np.column_stack(predictions) ```
AchyuthGamer/FlawlessAI
AchyuthGamer
2023-10-01T04:58:16Z
29
1
transformers
[ "transformers", "pytorch", "mistral", "finetuned", "chatgpt", "LLM", "openGPT", "free LLM", "no api key", "LLAMA", "llama chat", "opengpt model", "opengpt llm", "text-to-text", "Text-to-Text", "Chatbot", "Chat UI", "text-generation", "conversational", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-01T03:12:09Z
--- license: apache-2.0 pipeline_tag: text-generation tags: - finetuned - chatgpt - LLM - openGPT - free LLM - no api key - LLAMA - llama chat - opengpt model - opengpt llm - text-to-text - Text-to-Text - Chatbot - Chat UI --- # Model Card for OpenGPT-1.0 The OpenGPT-1.0 Large Language Model (LLM) is a instruct fine-tuned version of the [OpenGPT-1.0](https://huggingface.co/AchyuthGamer/OpenGPT) generative text model using a variety of publicly available conversation datasets. For full details of this model please read our [release blog post](https://huggingface.co/AchyuthGamer/OpenGPT) ## Instruction format In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[\INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id. E.g. ``` text = "<s>[INST] What is your favourite condiment? [/INST]" "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> " "[INST] Do you have mayonnaise recipes? [/INST]" ``` This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method: ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("AchyuthGamer/OpenGPT") tokenizer = AutoTokenizer.from_pretrained("AchyuthGamer/OpenGPT") messages = [ {"role": "user", "content": "What is your favourite condiment?"}, {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, {"role": "user", "content": "Do you have mayonnaise recipes?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## Model Architecture This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer ## Troubleshooting - If you see the following error: ``` Traceback (most recent call last): File "", line 1, in File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] File "/transformers/models/auto/configuration_auto.py", line 723, in getitem raise KeyError(key) KeyError: 'mistral' ``` Installing transformers from source should solve the issue pip install git+https://github.com/huggingface/transformers This should not be required after transformers-v4.33.4. ## Limitations The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. 
## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
mazayo/Reinforce-Pixelcopter-PLE-v0
mazayo
2023-10-01T04:37:29Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-10-01T04:37:24Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 63.10 +/- 38.98 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
pandyamarut/sd-xl-colab
pandyamarut
2023-10-01T04:22:40Z
5
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-10-01T03:44:18Z
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - mwiki/sd-xl-colab

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
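A hedged inference sketch with diffusers (not part of the original card): it loads the SDXL base model and attaches these LoRA weights; a CUDA GPU is assumed.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("pandyamarut/sd-xl-colab")  # LoRA weights from this repository

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```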
ankushjamthikar/aj_first_model
ankushjamthikar
2023-10-01T03:23:30Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-01T02:22:55Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: aj_first_model results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.93168 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aj_first_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2239 - Accuracy: 0.9317 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2267 | 1.0 | 1563 | 0.2272 | 0.9166 | | 0.1536 | 2.0 | 3126 | 0.2239 | 0.9317 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
william7642/my_awesome_food_model
william7642
2023-10-01T03:00:35Z
235
0
transformers
[ "transformers", "pytorch", "tensorboard", "onnx", "vit", "image-classification", "generated_from_trainer", "dataset:food101", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-12-20T08:01:07Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - food101 metrics: - accuracy model-index: - name: my_awesome_food_model results: - task: name: Image Classification type: image-classification dataset: name: food101 type: food101 config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.7962376237623763 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 1.0616 - Accuracy: 0.7962 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9969 | 1.0 | 947 | 1.9538 | 0.7321 | | 1.1907 | 2.0 | 1894 | 1.2216 | 0.7806 | | 0.9433 | 3.0 | 2841 | 1.0616 | 0.7962 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
ProtonH/poca-SoccerTwos
ProtonH
2023-10-01T03:00:32Z
32
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-10-01T03:00:06Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: ProtonH/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀