| Column | Type | Range / cardinality |
|:--------------|:----------------------|:------------------------------------------|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-15 00:43:56 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 521 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-15 00:40:56 |
| card | string | length 11 – 1.01M |
lmqg/mt5-small-zhquad-qag
lmqg
2023-11-10T16:21:15Z
4
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "questions and answers generation", "zh", "dataset:lmqg/qag_zhquad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-10T15:53:12Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: zh datasets: - lmqg/qag_zhquad pipeline_tag: text2text-generation tags: - questions and answers generation widget: - text: "南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。" example_title: "Questions & Answers Generation Example 1" model-index: - name: lmqg/mt5-small-zhquad-qag results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qag_zhquad type: default args: default metrics: - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) type: qa_aligned_f1_score_bertscore_question_answer_generation value: 75.47 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) type: qa_aligned_recall_bertscore_question_answer_generation value: 75.41 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) type: qa_aligned_precision_bertscore_question_answer_generation value: 75.56 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) type: qa_aligned_f1_score_moverscore_question_answer_generation value: 52.42 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) type: qa_aligned_recall_moverscore_question_answer_generation value: 52.33 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) type: qa_aligned_precision_moverscore_question_answer_generation value: 52.53 --- # Model Card of `lmqg/mt5-small-zhquad-qag` This model is fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for question & answer pair generation task on the [lmqg/qag_zhquad](https://huggingface.co/datasets/lmqg/qag_zhquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small) - **Language:** zh - **Training data:** [lmqg/qag_zhquad](https://huggingface.co/datasets/lmqg/qag_zhquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="zh", model="lmqg/mt5-small-zhquad-qag") # model prediction question_answer_pairs = model.generate_qa("南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近南安普敦中央火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-small-zhquad-qag") output = pipe("南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。") ``` ## Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-zhquad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_zhquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-------------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 75.47 | default | [lmqg/qag_zhquad](https://huggingface.co/datasets/lmqg/qag_zhquad) | | QAAlignedF1Score (MoverScore) | 52.42 | default | [lmqg/qag_zhquad](https://huggingface.co/datasets/lmqg/qag_zhquad) | | QAAlignedPrecision (BERTScore) | 75.56 | default | [lmqg/qag_zhquad](https://huggingface.co/datasets/lmqg/qag_zhquad) | | QAAlignedPrecision (MoverScore) | 52.53 | default | [lmqg/qag_zhquad](https://huggingface.co/datasets/lmqg/qag_zhquad) | | QAAlignedRecall (BERTScore) | 75.41 | default | [lmqg/qag_zhquad](https://huggingface.co/datasets/lmqg/qag_zhquad) | | QAAlignedRecall (MoverScore) | 52.33 | default | [lmqg/qag_zhquad](https://huggingface.co/datasets/lmqg/qag_zhquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_zhquad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: None - model: google/mt5-small - max_length: 512 - max_length_output: 256 - epoch: 12 - batch: 8 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-zhquad-qag/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
panverz/translate
panverz
2023-11-10T16:12:45Z
0
0
null
[ "region:us" ]
null
2023-11-10T16:11:42Z
from googletrans import LANGUAGES, Translator
import deepl

# Initialize Google Translate (unofficial googletrans client)
translator = Translator()

# Initialize DeepL (the official `deepl` package exposes deepl.Translator, not deepl.Client)
deepl_api_key = 'YOUR_DEEPL_API_KEY'
deepl_client = deepl.Translator(deepl_api_key)

# Text to translate
text = 'Hello, how are you?'

# Translate using Google Translate into every language googletrans supports
google_translations = {}
for lang_code, lang_name in LANGUAGES.items():
    google_translations[lang_name] = translator.translate(text, dest=lang_code).text
print('Google Translate:', google_translations)

# Translate using DeepL; DeepL supports fewer languages and uses its own target codes,
# so iterate over the target languages reported by the API instead of googletrans' list
deepl_translations = {}
for language in deepl_client.get_target_languages():
    result = deepl_client.translate_text(text, target_lang=language.code)
    deepl_translations[language.name] = result.text
print('DeepL:', deepl_translations)
matekadlicsko/opus-mt-tc-big-hu-en-finetuned-telex-news
matekadlicsko
2023-11-10T16:12:36Z
5
1
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "translation", "hu", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-11-10T15:26:58Z
--- license: apache-2.0 language: - hu - en pipeline_tag: translation --- ## Hungarian to English Translation Model (Fine-Tuned) **Language Pair:** - Source Language: Hungarian - Target Language: English **Model Description:** This model is a fine-tuned version based on [opus-mt-tc-big-hu-en](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-hu-en) for Hungarian to English translation. The fine-tuning process utilized articles from [telex](https://telex.hu/), enhancing the model's performance. **License:** The model is released under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), which is a permissive open-source license. It allows users to use, modify, and distribute the software for any purpose, with additional provisions related to patents. **Note:** Users are encouraged to review and comply with the terms of the Apache License 2.0 when using or contributing to this model. Additionally, citation and attribution to the original models and data sources are appreciated. ---
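The card above describes the checkpoint but does not show how to call it; a minimal inference sketch with the 🤗 Transformers `pipeline` (the Hungarian example sentence is illustrative):

```python
from transformers import pipeline

# Marian-based Hungarian -> English translation; the checkpoint id comes from this card.
translator = pipeline("translation", model="matekadlicsko/opus-mt-tc-big-hu-en-finetuned-telex-news")

result = translator("Jó reggelt, hogy vagy ma?")
print(result[0]["translation_text"])
```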
Nunofofo/rr
Nunofofo
2023-11-10T16:08:50Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-11-10T16:07:57Z
--- license: openrail license_name: open license_link: LICENSE ---
Danielbrdz/CodeBarcenas-1b
Danielbrdz
2023-11-10T16:07:18Z
1,500
0
transformers
[ "transformers", "pytorch", "gpt_bigcode", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-10T15:42:39Z
--- license: llama2 --- CodeBarcenas: a model specialized in the Python language. Based on the model: WizardLM/WizardCoder-1B-V1.0 Trained with the dataset: mlabonne/Evol-Instruct-Python-1k Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
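The card lists the base model and training data but no usage snippet; a minimal, hedged sketch with the 🤗 Transformers text-generation pipeline (prompt and sampling settings are illustrative):

```python
from transformers import pipeline

# GPT-BigCode checkpoint, so the standard text-generation pipeline applies.
generator = pipeline("text-generation", model="Danielbrdz/CodeBarcenas-1b")

prompt = "def fibonacci(n):"
output = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.2)
print(output[0]["generated_text"])
```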
skroed/musicgen-medium
skroed
2023-11-10T15:57:22Z
3
0
transformers
[ "transformers", "pytorch", "musicgen", "text-to-audio", "arxiv:2306.05284", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-audio
2023-10-28T18:48:45Z
--- inference: true tags: - musicgen license: cc-by-nc-4.0 pipeline_tag: text-to-audio widget: - text: a funky house with 80s hip hop vibes example_title: Prompt 1 - text: a chill song with influences from lofi, chillstep and downtempo example_title: Prompt 2 - text: a catchy beat for a podcast intro example_title: Prompt 3 --- # MusicGen - Medium - 1.5B MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio. MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*. Four checkpoints are released: - [small](https://huggingface.co/facebook/musicgen-small) - [**medium** (this checkpoint)](https://huggingface.co/facebook/musicgen-medium) - [large](https://huggingface.co/facebook/musicgen-large) - [melody](https://huggingface.co/facebook/musicgen-melody) ## Example Try out MusicGen yourself! * Audiocraft Colab: <a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Colab: <a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Demo: <a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> ## 🤗 Transformers Usage You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards. 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy: ``` pip install --upgrade pip pip install --upgrade transformers scipy ``` 2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code! ```python from transformers import pipeline import scipy synthesiser = pipeline("text-to-audio", "facebook/musicgen-medium") music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True}) scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"]) ``` 3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control. ```python from transformers import AutoProcessor, MusicgenForConditionalGeneration processor = AutoProcessor.from_pretrained("facebook/musicgen-medium") model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-medium") inputs = processor( text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], padding=True, return_tensors="pt", ) audio_values = model.generate(**inputs, max_new_tokens=256) ``` 4. 
Listen to the audio samples either in an ipynb notebook: ```python from IPython.display import Audio sampling_rate = model.config.audio_encoder.sampling_rate Audio(audio_values[0].numpy(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g. `scipy`: ```python import scipy sampling_rate = model.config.audio_encoder.sampling_rate scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy()) ``` For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen). ## Audiocraft Usage You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft): 1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft) ``` pip install git+https://github.com/facebookresearch/audiocraft.git ``` 2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed: ``` apt-get install ffmpeg ``` 3. Run the following Python code: ```py from audiocraft.models import MusicGen from audiocraft.data.audio import audio_write model = MusicGen.get_pretrained("medium") model.set_generation_params(duration=8)  # generate 8 seconds. descriptions = ["happy rock", "energetic EDM"] wav = model.generate(descriptions)  # generates 2 samples. for idx, one_wav in enumerate(wav): # Will save under {idx}.wav, with loudness normalization at -14 db LUFS. audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness") ``` ## Model details **Organization developing the model:** The FAIR team of Meta AI. **Model date:** MusicGen was trained between April 2023 and May 2023. **Model version:** This is version 1 of the model. **Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation. **Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284). **Citation details:** ``` @misc{copet2023simple, title={Simple and Controllable Music Generation}, author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, year={2023}, eprint={2306.05284}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` **License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0. **Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. 
## Intended use **Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science - Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs **Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateur seeking to better understand those models. **Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. ## Metrics **Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark: - Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) - Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) - CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes: - Overall quality of the music samples; - Text relevance to the provided text input; - Adherence to the melody for melody-guided music generation. More details on performance measures and human studies can be found in the paper. **Decision thresholds:** Not applicable. ## Evaluation datasets The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set. ## Training datasets The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing. ## Evaluation results Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper. | Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity | |---|---|---|---|---| | facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - | | **facebook/musicgen-medium** | 5.14 | 1.38 | 0.28 | - | | facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - | | facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 | More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section. 
## Limitations and biases **Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model. **Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). **Limitations:** - The model is not able to generate realistic vocals. - The model has been trained with English descriptions and will not perform as well in other languages. - The model does not perform equally well for all music styles and cultures. - The model sometimes generates end of songs, collapsing to silence. - It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. **Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. **Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data. **Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
Nostradamused/Taxi-v3
Nostradamused
2023-11-10T15:55:08Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T15:55:06Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Nostradamused/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
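`load_from_hub` in the usage snippet above is a helper defined in the Deep RL course notebooks rather than a function importable from a published package; an equivalent sketch using `huggingface_hub` directly (this assumes the pickle stores a dict with `"env_id"` and `"qtable"` keys, as in the course template):

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download the pickled Q-table from the Hub and load it.
path = hf_hub_download(repo_id="Nostradamused/Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])  # add extra kwargs here if needed (e.g. render_mode)
qtable = model["qtable"]
```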
GraydientPlatformAPI/gasm-z
GraydientPlatformAPI
2023-11-10T15:51:21Z
3
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:openrail", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-10T15:46:07Z
--- license: openrail library_name: diffusers pipeline_tag: text-to-image ---
PsiPi/lmsys_vicuna-13b-v1.5-16k-exl2-3.0bpw
PsiPi
2023-11-10T15:39:56Z
6
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2307.09288", "arxiv:2306.05685", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-10T14:43:26Z
--- inference: false license: llama2 --- # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture - **License:** Llama 2 Community License Agreement - **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288) ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model - Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights - APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api ## Training Details Vicuna v1.5 (16k) is fine-tuned from Llama 2 with supervised instruction fine-tuning and linear RoPE scaling. The training data is around 125K conversations collected from ShareGPT.com. These conversations are packed into sequences that contain 16K tokens each. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation ![Evaluation Results](https://github.com/lm-sys/lm-sys.github.io/blob/main/public/images/webdata/vicuna_v1.5_eval.png?raw=true) Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
Vargol/lcm_sdxl_full_model
Vargol
2023-11-10T15:37:05Z
2
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-11-10T11:40:02Z
--- license: openrail++ tags: - text-to-image - stable-diffusion --- This is a copy of the sdxl base (stabilityai/stable-diffusion-xl-base-1.0) with the unet replaced with the LCM distilled unet (latent-consistency/lcm-sdxl) and scheduler config set to default to the LCM Scheduler. This makes LCM SDXL run as a standard Diffusion Pipeline ```py from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained( "Vargol/lcm_sdxl_full_model", variant='fp16', torch_dtype=torch.float16 ).to("mps") prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" generator = torch.manual_seed(0) image = pipe( prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 ).images[0] image.save('distilled.png') ```
nickrobinson/distilbert-base-uncased-finetuned-emotion
nickrobinson
2023-11-10T15:29:30Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-10T15:21:15Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9235 - name: F1 type: f1 value: 0.9232572625951749 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2213 - Accuracy: 0.9235 - F1: 0.9233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8544 | 1.0 | 250 | 0.3350 | 0.902 | 0.9008 | | 0.2605 | 2.0 | 500 | 0.2213 | 0.9235 | 0.9233 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
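For completeness, a minimal inference sketch with the 🤗 Transformers pipeline (the input sentence is illustrative; the model predicts one of the six `emotion` dataset labels):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="nickrobinson/distilbert-base-uncased-finetuned-emotion")

print(classifier("I'm so happy you came to visit, it made my whole week!"))
```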
Spleonard1/my_awesome_qa_model
Spleonard1
2023-11-10T15:17:09Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-11-10T15:14:27Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - squad model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.4027 | | 2.7785 | 2.0 | 500 | 1.7883 | | 2.7785 | 3.0 | 750 | 1.6857 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
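For completeness, a minimal inference sketch with the 🤗 Transformers question-answering pipeline (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Spleonard1/my_awesome_qa_model")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="my_awesome_qa_model is a DistilBERT checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```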
darshsingh1/sqlcoder2-fasttrain_7k
darshsingh1
2023-11-10T15:14:03Z
0
0
null
[ "safetensors", "generated_from_trainer", "dataset:mpachauri/DatasetTrimmed", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2023-11-08T13:15:13Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: sqlcoder2-fasttrain_7k results: [] datasets: - mpachauri/DatasetTrimmed --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sqlcoder2-fasttrain_7k This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.5 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.4687 | 0.65 | 500 | nan | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
piotrklima/ppo-LunarLander-v2
piotrklima
2023-11-10T15:04:44Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T15:04:24Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.77 +/- 13.88 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
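The usage section above is still a TODO; a hedged sketch of how such a checkpoint is typically loaded with `huggingface_sb3` (the filename `ppo-LunarLander-v2.zip` is an assumption, check the repository's file list):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by the card.
checkpoint = load_from_hub(repo_id="piotrklima/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2", render_mode="rgb_array")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```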
AntoineD/MiniLM_uncased_classification_tools_qlora_fr
AntoineD
2023-11-10T15:01:45Z
0
0
null
[ "generated_from_trainer", "base_model:microsoft/MiniLM-L12-H384-uncased", "base_model:finetune:microsoft/MiniLM-L12-H384-uncased", "license:mit", "region:us" ]
null
2023-11-10T14:59:12Z
--- license: mit base_model: microsoft/MiniLM-L12-H384-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: MiniLM_uncased_classification_tools_qlora_fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MiniLM_uncased_classification_tools_qlora_fr This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7495 - Accuracy: 0.5 - Learning Rate: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 24 - eval_batch_size: 192 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Rate | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 7 | 2.0845 | 0.075 | 0.0001 | | No log | 2.0 | 14 | 2.0852 | 0.075 | 0.0001 | | No log | 3.0 | 21 | 2.0851 | 0.075 | 0.0001 | | No log | 4.0 | 28 | 2.0855 | 0.075 | 0.0001 | | No log | 5.0 | 35 | 2.0858 | 0.075 | 0.0001 | | No log | 6.0 | 42 | 2.0861 | 0.075 | 9e-05 | | No log | 7.0 | 49 | 2.0863 | 0.075 | 0.0001 | | No log | 8.0 | 56 | 2.0859 | 0.075 | 0.0001 | | No log | 9.0 | 63 | 2.0856 | 0.075 | 0.0001 | | No log | 10.0 | 70 | 2.0855 | 0.075 | 0.0001 | | No log | 11.0 | 77 | 2.0848 | 0.075 | 0.0001 | | No log | 12.0 | 84 | 2.0844 | 0.075 | 8e-05 | | No log | 13.0 | 91 | 2.0829 | 0.075 | 0.0001 | | No log | 14.0 | 98 | 2.0825 | 0.075 | 0.0001 | | No log | 15.0 | 105 | 2.0811 | 0.1 | 0.0001 | | No log | 16.0 | 112 | 2.0793 | 0.175 | 0.0001 | | No log | 17.0 | 119 | 2.0771 | 0.125 | 0.0001 | | No log | 18.0 | 126 | 2.0750 | 0.175 | 7e-05 | | No log | 19.0 | 133 | 2.0723 | 0.175 | 0.0001 | | No log | 20.0 | 140 | 2.0683 | 0.175 | 0.0001 | | No log | 21.0 | 147 | 2.0637 | 0.175 | 0.0001 | | No log | 22.0 | 154 | 2.0574 | 0.175 | 0.0001 | | No log | 23.0 | 161 | 2.0507 | 0.2 | 0.0001 | | No log | 24.0 | 168 | 2.0419 | 0.325 | 6e-05 | | No log | 25.0 | 175 | 2.0318 | 0.35 | 0.0001 | | No log | 26.0 | 182 | 2.0214 | 0.4 | 0.0001 | | No log | 27.0 | 189 | 2.0083 | 0.4 | 0.0001 | | No log | 28.0 | 196 | 1.9949 | 0.4 | 0.0001 | | No log | 29.0 | 203 | 1.9781 | 0.4 | 0.0001 | | No log | 30.0 | 210 | 1.9609 | 0.4 | 5e-05 | | No log | 31.0 | 217 | 1.9475 | 0.425 | 0.0000 | | No log | 32.0 | 224 | 1.9317 | 0.45 | 0.0000 | | No log | 33.0 | 231 | 1.9131 | 0.45 | 0.0000 | | No log | 34.0 | 238 | 1.9015 | 0.475 | 0.0000 | | No log | 35.0 | 245 | 1.8906 | 0.5 | 0.0000 | | No log | 36.0 | 252 | 1.8740 | 0.475 | 4e-05 | | No log | 37.0 | 259 | 1.8613 | 0.5 | 0.0000 | | No log | 38.0 | 266 | 1.8552 | 0.525 | 0.0000 | | No log | 39.0 | 273 | 1.8389 | 0.5 | 0.0000 | | No log | 40.0 | 280 | 1.8302 | 0.5 | 0.0000 | | No log | 41.0 | 287 | 1.8228 | 0.5 | 0.0000 | | No log | 42.0 | 294 | 1.8244 | 0.525 | 3e-05 | | No log | 43.0 | 301 | 1.8048 | 0.5 | 0.0000 | | No log | 44.0 | 308 | 1.7944 | 0.525 | 0.0000 | | No log | 45.0 | 315 | 1.7929 | 0.5 | 0.0000 | | No log | 46.0 | 322 | 1.7904 | 0.5 | 0.0000 | | No log | 47.0 
| 329 | 1.7810 | 0.5 | 0.0000 | | No log | 48.0 | 336 | 1.7790 | 0.5 | 2e-05 | | No log | 49.0 | 343 | 1.7758 | 0.5 | 0.0000 | | No log | 50.0 | 350 | 1.7677 | 0.525 | 0.0000 | | No log | 51.0 | 357 | 1.7626 | 0.525 | 0.0000 | | No log | 52.0 | 364 | 1.7579 | 0.525 | 0.0000 | | No log | 53.0 | 371 | 1.7552 | 0.525 | 0.0000 | | No log | 54.0 | 378 | 1.7544 | 0.525 | 1e-05 | | No log | 55.0 | 385 | 1.7523 | 0.525 | 0.0000 | | No log | 56.0 | 392 | 1.7510 | 0.525 | 0.0000 | | No log | 57.0 | 399 | 1.7501 | 0.525 | 5e-06 | | No log | 58.0 | 406 | 1.7498 | 0.525 | 0.0000 | | No log | 59.0 | 413 | 1.7496 | 0.525 | 0.0000 | | No log | 60.0 | 420 | 1.7495 | 0.5 | 0.0 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.1
AntoineD/MiniLM_uncased_classification_tools_fr
AntoineD
2023-11-10T14:56:00Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:microsoft/MiniLM-L12-H384-uncased", "base_model:finetune:microsoft/MiniLM-L12-H384-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-10T14:50:18Z
--- license: mit base_model: microsoft/MiniLM-L12-H384-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: MiniLM_uncased_classification_tools_fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MiniLM_uncased_classification_tools_fr This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3165 - Accuracy: 0.925 - Learning Rate: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 24 - eval_batch_size: 192 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Rate | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 7 | 2.0607 | 0.35 | 0.0001 | | No log | 2.0 | 14 | 1.9111 | 0.425 | 0.0001 | | No log | 3.0 | 21 | 1.6543 | 0.45 | 0.0001 | | No log | 4.0 | 28 | 1.4578 | 0.525 | 0.0001 | | No log | 5.0 | 35 | 1.3136 | 0.65 | 0.0001 | | No log | 6.0 | 42 | 1.2160 | 0.7 | 9e-05 | | No log | 7.0 | 49 | 1.0786 | 0.725 | 0.0001 | | No log | 8.0 | 56 | 1.0171 | 0.675 | 0.0001 | | No log | 9.0 | 63 | 0.9491 | 0.7 | 0.0001 | | No log | 10.0 | 70 | 0.8773 | 0.75 | 0.0001 | | No log | 11.0 | 77 | 0.8019 | 0.75 | 0.0001 | | No log | 12.0 | 84 | 0.7436 | 0.775 | 8e-05 | | No log | 13.0 | 91 | 0.6747 | 0.825 | 0.0001 | | No log | 14.0 | 98 | 0.7357 | 0.775 | 0.0001 | | No log | 15.0 | 105 | 0.5386 | 0.85 | 0.0001 | | No log | 16.0 | 112 | 0.6222 | 0.85 | 0.0001 | | No log | 17.0 | 119 | 0.6284 | 0.85 | 0.0001 | | No log | 18.0 | 126 | 0.4489 | 0.9 | 7e-05 | | No log | 19.0 | 133 | 0.6431 | 0.85 | 0.0001 | | No log | 20.0 | 140 | 0.6064 | 0.85 | 0.0001 | | No log | 21.0 | 147 | 0.6948 | 0.825 | 0.0001 | | No log | 22.0 | 154 | 0.5535 | 0.85 | 0.0001 | | No log | 23.0 | 161 | 0.4672 | 0.875 | 0.0001 | | No log | 24.0 | 168 | 0.4797 | 0.875 | 6e-05 | | No log | 25.0 | 175 | 0.4908 | 0.9 | 0.0001 | | No log | 26.0 | 182 | 0.5879 | 0.85 | 0.0001 | | No log | 27.0 | 189 | 0.6601 | 0.85 | 0.0001 | | No log | 28.0 | 196 | 0.6036 | 0.85 | 0.0001 | | No log | 29.0 | 203 | 0.5495 | 0.85 | 0.0001 | | No log | 30.0 | 210 | 0.5135 | 0.85 | 5e-05 | | No log | 31.0 | 217 | 0.4767 | 0.875 | 0.0000 | | No log | 32.0 | 224 | 0.4431 | 0.9 | 0.0000 | | No log | 33.0 | 231 | 0.4681 | 0.875 | 0.0000 | | No log | 34.0 | 238 | 0.5612 | 0.85 | 0.0000 | | No log | 35.0 | 245 | 0.4495 | 0.9 | 0.0000 | | No log | 36.0 | 252 | 0.4384 | 0.9 | 4e-05 | | No log | 37.0 | 259 | 0.4378 | 0.875 | 0.0000 | | No log | 38.0 | 266 | 0.4104 | 0.875 | 0.0000 | | No log | 39.0 | 273 | 0.5060 | 0.875 | 0.0000 | | No log | 40.0 | 280 | 0.4756 | 0.875 | 0.0000 | | No log | 41.0 | 287 | 0.4558 | 0.875 | 0.0000 | | No log | 42.0 | 294 | 0.4458 | 0.9 | 3e-05 | | No log | 43.0 | 301 | 0.3969 | 0.875 | 0.0000 | | No log | 44.0 | 308 | 0.4762 | 0.875 | 0.0000 | | No log | 45.0 | 315 | 0.4891 | 0.875 | 0.0000 | | No log | 46.0 | 322 | 0.4460 | 0.9 | 0.0000 | | No log | 47.0 | 329 | 
0.3892 | 0.925 | 0.0000 | | No log | 48.0 | 336 | 0.4267 | 0.9 | 2e-05 | | No log | 49.0 | 343 | 0.3327 | 0.9 | 0.0000 | | No log | 50.0 | 350 | 0.3225 | 0.925 | 0.0000 | | No log | 51.0 | 357 | 0.3223 | 0.925 | 0.0000 | | No log | 52.0 | 364 | 0.3136 | 0.95 | 0.0000 | | No log | 53.0 | 371 | 0.3109 | 0.925 | 0.0000 | | No log | 54.0 | 378 | 0.3142 | 0.9 | 1e-05 | | No log | 55.0 | 385 | 0.3168 | 0.925 | 0.0000 | | No log | 56.0 | 392 | 0.3163 | 0.925 | 0.0000 | | No log | 57.0 | 399 | 0.3174 | 0.925 | 5e-06 | | No log | 58.0 | 406 | 0.3185 | 0.925 | 0.0000 | | No log | 59.0 | 413 | 0.3168 | 0.925 | 0.0000 | | No log | 60.0 | 420 | 0.3165 | 0.925 | 0.0 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.1
linoyts/huggy_v28
linoyts
2023-11-10T14:49:55Z
9
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-11-10T14:09:52Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A webpage in the style of <s0><s1><s2> tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - LinoyTsaban/huggy_v28 These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the prompt "A webpage in the style of <s0><s1><s2>" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
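A hedged usage sketch with diffusers: load the SDXL base model and attach these LoRA weights with `load_lora_weights`. Note that the `<s0><s1><s2>` instance tokens were learned as new embeddings during training, and those embeddings live in a separate file whose name is not stated in the card, so this sketch alone may not fully reproduce the trained style:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("linoyts/huggy_v28")

image = pipe("A webpage in the style of <s0><s1><s2>", num_inference_steps=30).images[0]
image.save("huggy.png")
```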
quentino/ppo-LunarLander-v2
quentino
2023-11-10T14:45:07Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T14:44:47Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.12 +/- 19.75 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
blabla1233/ppo-LunarLander-v2
blabla1233
2023-11-10T14:44:31Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T14:43:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 250.88 +/- 14.95 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ChrisEsworthy/Covid_Misinformation_Model
ChrisEsworthy
2023-11-10T14:35:54Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:spencer-gable-cook/COVID-19_Misinformation_Detector", "base_model:finetune:spencer-gable-cook/COVID-19_Misinformation_Detector", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-04-12T11:55:41Z
--- license: mit tags: - generated_from_trainer base_model: spencer-gable-cook/COVID-19_Misinformation_Detector model-index: - name: Covid_Misinformation_Model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Covid_Misinformation_Model This model is a fine-tuned version of [spencer-gable-cook/COVID-19_Misinformation_Detector](https://huggingface.co/spencer-gable-cook/COVID-19_Misinformation_Detector) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 1.9.0+cu111 - Datasets 2.11.0 - Tokenizers 0.12.1
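No usage example is included in the card; a minimal sketch with the 🤗 Transformers text-classification pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

detector = pipeline("text-classification", model="ChrisEsworthy/Covid_Misinformation_Model")

print(detector("Drinking hot water every 15 minutes kills the coronavirus."))
```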
NesrineBannour/CAS-privacy-preserving-model
NesrineBannour
2023-11-10T14:29:32Z
0
0
transformers
[ "transformers", "biomedical", "clinical", "pytorch", "camembert", "token-classification", "fr", "dataset:bigbio/cas", "license:cc-by-sa-4.0", "region:us" ]
token-classification
2023-10-10T11:34:38Z
--- license: cc-by-sa-4.0 datasets: - bigbio/cas language: - fr metrics: - f1 - precision - recall library_name: transformers tags: - biomedical - clinical - pytorch - camembert pipeline_tag: token-classification inference: false --- # Privacy-preserving mimic models for clinical named entity recognition in French <!-- ## Paper abstract --> In this [paper](https://doi.org/10.1016/j.jbi.2022.104073), we propose a Privacy-Preserving Mimic Models architecture that enables the generation of shareable models using the *mimic learning* approach. The idea of mimic learning is to annotate unlabeled public data through a *private teacher model* trained on the original sensitive data. The newly labeled public dataset is then used to train the *student models*. These generated *student models* could be shared without sharing the data itself or exposing the *private teacher model* that was directly built on this data. # CAS Privacy-Preserving Named Entity Recognition (NER) Mimic Model <!-- Provide a quick summary of what the model is/does. --> To generate the CAS Privacy-Preserving Mimic Model, we used a *private teacher model* to annotate the unlabeled [CAS clinical French corpus](https://aclanthology.org/W18-5614/). The *private teacher model* is an NER model trained on the [MERLOT clinical corpus](https://link.springer.com/article/10.1007/s10579-017-9382-y) and could not be shared. Using the produced [silver annotations](https://zenodo.org/records/6451361), we train the CAS *student model*, namely the CAS Privacy-Preserving NER Mimic Model. This model might be viewed as a knowledge transfer process between the *teacher* and the *student model* in a privacy-preserving manner. We share only the weights of the CAS *student model*, which is trained on silver-labeled publicly released data. We argue that no potential attack could reveal information about sensitive private data using the silver annotations generated by the *private teacher model* on publicly available non-sensitive data. Our model is constructed based on [CamemBERT](https://huggingface.co/camembert) model using the Natural language structuring ([NLstruct](https://github.com/percevalw/nlstruct)) library that implements NER models that handle nested entities. - **Paper:** [Privacy-preserving mimic models for clinical named entity recognition in French](https://doi.org/10.1016/j.jbi.2022.104073) - **Produced gold and silver annotations for the [DEFT](https://deft.lisn.upsaclay.fr/2020/) and [CAS](https://aclanthology.org/W18-5614/) French clinical corpora:** https://zenodo.org/records/6451361 - **Developed by:** [Nesrine Bannour](https://github.com/NesrineBannour), [Perceval Wajsbürt](https://github.com/percevalw), [Bastien Rance](https://team.inria.fr/heka/fr/team-members/rance/), [Xavier Tannier](http://xavier.tannier.free.fr/) and [Aurélie Névéol](https://perso.limsi.fr/neveol/) - **Language:** French - **License:** cc-by-sa-4.0 <!-- ## Model Sources --> <!-- Provide the basic links for the model. --> <!-- ## Training Details <!-- ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> <!-- ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> <!-- #### Training Hyperparameters --> # Download the CAS Privacy-Preserving NER Mimic Model ```python fasttext_url = hf_hub_url(repo_id="NesrineBannour/CAS-privacy-preserving-model", filename="CAS-privacy-preserving-model_fasttext.txt") urllib.request.urlretrieve(fasttext_url, fasttext_url.split('/')[-1]) model_url = hf_hub_url(repo_id="NesrineBannour/CAS-privacy-preserving-model", filename="CAS-privacy-preserving-model.ckpt") urllib.request.urlretrieve(model_url, "path/to/your/folder/"+ model_url.split('/')[-1]) path_checkpoint = "path/to/your/folder/"+ model_url.split('/')[-1] ``` ## 1. Load and use the model using only NLstruct [NLstruct](https://github.com/percevalw/nlstruct) is the Python library we used to generate our CAS privacy-preserving NER mimic model and that handles nested entities. ### Install the NLstruct library ``` pip install nlstruct==0.1.0 ``` ### Use the model ```python from nlstruct import load_pretrained from nlstruct.datasets import load_from_brat, export_to_brat ner_model = load_pretrained(path_checkpoint) test_data = load_from_brat("path/to/brat/test") test_predictions = ner_model.predict(test_data) # Export the predictions into the BRAT standoff format export_to_brat(test_predictions, filename_prefix="path/to/exported_brat") ``` ## 2. Load the model using NLstruct and use it with the Medkit library [Medkit](https://github.com/TeamHeka/medkit) is a Python library for facilitating the extraction of features from various modalities of patient data, including textual data. ### Install the Medkit library ``` python -m pip install 'medkit-lib' ``` ### Use the model Our model could be implemented as a Medkit operation module as follows: ```python import os from nlstruct import load_pretrained import urllib.request from huggingface_hub import hf_hub_url from medkit.io.brat import BratInputConverter, BratOutputConverter from medkit.core import Attribute from medkit.core.text import NEROperation,Entity,Span,Segment, span_utils class CAS_matcher(NEROperation): def __init__(self): # Load the fasttext file fasttext_url = hf_hub_url(repo_id="NesrineBannour/CAS-privacy-preserving-model", filename="CAS-privacy-preserving-model_fasttext.txt") if not os.path.exists("CAS-privacy-preserving-model_fasttext.txt"): urllib.request.urlretrieve(fasttext_url, fasttext_url.split('/')[-1]) # Load the model model_url = hf_hub_url(repo_id="NesrineBannour/CAS-privacy-preserving-model", filename="CAS-privacy-preserving-model.ckpt") if not os.path.exists("ner_model/CAS-privacy-preserving-model.ckpt"): urllib.request.urlretrieve(model_url, "ner_model/"+ model_url.split('/')[-1]) path_checkpoint = "ner_model/"+ model_url.split('/')[-1] self.model = load_pretrained(path_checkpoint) self.model.eval() def run(self, segments): """Return entities for each match in `segments`. Parameters ---------- segments: List of segments into which to look for matches. Returns ------- List[Entity] Entities found in `segments`. 
""" # get an iterator to all matches, grouped by segment entities = [] for segment in segments: matches = self.model.predict({"doc_id":segment.uid,"text":segment.text}) entities.extend([entity for entity in self._matches_to_entities(matches, segment) ]) return entities def _matches_to_entities(self, matches, segment: Segment): for match in matches["entities"]: text_all,spans_all = [],[] for fragment in match["fragments"]: text, spans = span_utils.extract( segment.text, segment.spans, [(fragment["begin"], fragment["end"])] ) text_all.append(text) spans_all.extend(spans) text_all = "".join(text_all) entity = Entity( label=match["label"], text=text_all, spans=spans_all, ) score_attr = Attribute( label="confidence", value=float(match["confidence"]), #metadata=dict(model=self.model.path_checkpoint), ) entity.attrs.add(score_attr) yield entity brat_converter = BratInputConverter() docs = brat_converter.load("path/to/brat/test") matcher = CAS_matcher() for doc in docs: entities = matcher.run([doc.raw_segment]) for ent in entities: doc.anns.add(ent) brat_output_converter = BratOutputConverter(attrs=[]) # To keep the same document names in the output folder doc_names = [os.path.splitext(os.path.basename(doc.metadata["path_to_text"]))[0] for doc in docs] brat_output_converter.save(docs, dir_path="path/to/exported_brat, doc_names=doc_names) ``` <!-- ## Evaluation of test data <!-- This section describes the evaluation protocols and provides the results. --> <!-- #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> <!-- [More Information Needed] ### Results [More Information Needed] #### Summary --> ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions are estimated using the [Carbontracker](https://github.com/lfwa/carbontracker) tool. The used version at the time of our experiments computes its estimates by using the average carbon intensity in European Union in 2017 instead of the France value (294.21 gCO<sub>2</sub>eq/kWh vs. 85 gCO<sub>2</sub>eq/kWh). Therefore, our reported carbon footprint of training both the private model that generated the silver annotations and the CAS student model is overestimated. - **Hardware Type:** GPU NVIDIA GTX 1080 Ti - **Compute Region:** Gif-sur-Yvette, Île-de-France, France - **Carbon Emitted:** 292 gCO<sub>2</sub>eq ## Acknowledgements We thank the institutions and colleagues who made it possible to use the datasets described in this study: the Biomedical Informatics Department at the Rouen University Hospital provided access to the LERUDI corpus, and Dr. Grabar (Université de Lille, CNRS, STL) granted permission to use the DEFT/CAS corpus. We would also like to thank the ITMO Cancer Aviesan for funding our research, and the [HeKA research team](https://team.inria.fr/heka/) for integrating our model into their library [Medkit]((https://github.com/TeamHeka/medkit)). ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> If you use this model in your research, please make sure to cite our paper: ```bibtex @article{BANNOUR2022104073, title = {Privacy-preserving mimic models for clinical named entity recognition in French}, journal = {Journal of Biomedical Informatics}, volume = {130}, pages = {104073}, year = {2022}, issn = {1532-0464}, doi = {https://doi.org/10.1016/j.jbi.2022.104073}, url = {https://www.sciencedirect.com/science/article/pii/S1532046422000892}} } ``` <!-- ## Bias, Risks, and Limitations --> <!-- This section is meant to convey both technical and sociotechnical limitations. --> <!-- [More Information Needed] -->
voxxer/ppo-SnowballTarget
voxxer
2023-11-10T14:26:21Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-11-10T14:26:18Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: voxxer/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
alicelouis/Swin2e-4Lion
alicelouis
2023-11-10T14:26:03Z
7
0
transformers
[ "transformers", "safetensors", "swin", "image-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T14:12:49Z
---
license: mit
metrics:
- accuracy
---

```python
from transformers import AutoImageProcessor, SwinForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("alicelouis/Swin2e-4Lion")
model = SwinForImageClassification.from_pretrained("alicelouis/Swin2e-4Lion")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# the model predicts one of the classes it was fine-tuned on
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold5
hkivancoral
2023-11-10T14:23:39Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T14:21:48Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_deit_tiny_sgd_lr0001_fold5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.14634146341463414 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr0001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5789 - Accuracy: 0.1463 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6455 | 0.1220 | | 1.6035 | 2.0 | 12 | 1.6420 | 0.1220 | | 1.6035 | 3.0 | 18 | 1.6386 | 0.1463 | | 1.6142 | 4.0 | 24 | 1.6353 | 0.1463 | | 1.5857 | 5.0 | 30 | 1.6321 | 0.1463 | | 1.5857 | 6.0 | 36 | 1.6289 | 0.1463 | | 1.5718 | 7.0 | 42 | 1.6259 | 0.1463 | | 1.5718 | 8.0 | 48 | 1.6232 | 0.1463 | | 1.5833 | 9.0 | 54 | 1.6206 | 0.1463 | | 1.5737 | 10.0 | 60 | 1.6178 | 0.1463 | | 1.5737 | 11.0 | 66 | 1.6153 | 0.1463 | | 1.5614 | 12.0 | 72 | 1.6128 | 0.1220 | | 1.5614 | 13.0 | 78 | 1.6104 | 0.1220 | | 1.5648 | 14.0 | 84 | 1.6081 | 0.1220 | | 1.5575 | 15.0 | 90 | 1.6060 | 0.1220 | | 1.5575 | 16.0 | 96 | 1.6040 | 0.1220 | | 1.5452 | 17.0 | 102 | 1.6020 | 0.1220 | | 1.5452 | 18.0 | 108 | 1.6002 | 0.1220 | | 1.5768 | 19.0 | 114 | 1.5984 | 0.1220 | | 1.5464 | 20.0 | 120 | 1.5966 | 0.1220 | | 1.5464 | 21.0 | 126 | 1.5950 | 0.1220 | | 1.5149 | 22.0 | 132 | 1.5934 | 0.1220 | | 1.5149 | 23.0 | 138 | 1.5920 | 0.1220 | | 1.6056 | 24.0 | 144 | 1.5905 | 0.1220 | | 1.5161 | 25.0 | 150 | 1.5892 | 0.1220 | | 1.5161 | 26.0 | 156 | 1.5879 | 0.1220 | | 1.519 | 27.0 | 162 | 1.5868 | 0.1220 | | 1.519 | 28.0 | 168 | 1.5857 | 0.1220 | | 1.5531 | 29.0 | 174 | 1.5848 | 0.1220 | | 1.5347 | 30.0 | 180 | 1.5839 | 0.1220 | | 1.5347 | 31.0 | 186 | 1.5831 | 0.1220 | | 1.5238 | 32.0 | 192 | 1.5824 | 0.1220 | | 1.5238 | 33.0 | 198 | 1.5817 | 0.1463 | | 1.5463 | 34.0 | 204 | 1.5811 | 0.1463 | | 1.5219 | 35.0 | 210 | 1.5805 | 0.1463 | | 1.5219 | 36.0 | 216 | 1.5800 | 0.1463 | | 1.5056 | 37.0 | 222 | 1.5797 | 0.1463 | | 1.5056 | 38.0 | 228 | 1.5794 | 0.1463 | | 1.5505 | 39.0 | 234 | 1.5791 | 0.1463 | | 1.5261 | 40.0 | 240 | 1.5790 | 0.1463 | | 1.5261 | 41.0 | 246 | 1.5789 | 0.1463 | | 1.5175 | 42.0 | 252 | 1.5789 | 0.1463 | | 1.5175 | 43.0 | 258 | 1.5789 | 0.1463 | | 1.5317 | 44.0 | 264 | 1.5789 | 0.1463 | | 1.5241 | 45.0 | 270 | 1.5789 | 0.1463 | | 1.5241 | 46.0 | 276 | 1.5789 | 0.1463 | | 1.5533 | 47.0 | 282 | 1.5789 | 0.1463 | | 1.5533 | 48.0 | 288 | 1.5789 | 0.1463 | | 1.4945 | 49.0 | 294 | 1.5789 | 0.1463 
| 1.5379        | 50.0  | 300  | 1.5789          | 0.1463   |


### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
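To try the checkpoint on a new image, a minimal sketch with the `transformers` image-classification pipeline could look like the following (the image path is a placeholder; the label set comes from the checkpoint's config):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline("image-classification", model="hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold5")

# Replace with a path or URL to one of your own images
for prediction in classifier("path/to/image.png"):
    print(prediction["label"], round(prediction["score"], 4))
```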
VitaliiVrublevskyi/v10_bert-base-uncased-finetuned-mrpc
VitaliiVrublevskyi
2023-11-10T14:21:33Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-10T14:09:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: v10_bert-base-uncased-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8455882352941176 - name: F1 type: f1 value: 0.8923076923076922 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # v10_bert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5079 - Accuracy: 0.8456 - F1: 0.8923 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 79 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 115 | 0.5043 | 0.7574 | 0.8319 | | No log | 2.0 | 230 | 0.4095 | 0.8456 | 0.8919 | | No log | 3.0 | 345 | 0.4298 | 0.8407 | 0.8889 | | No log | 4.0 | 460 | 0.4580 | 0.8529 | 0.8962 | | 0.3409 | 5.0 | 575 | 0.5079 | 0.8456 | 0.8923 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.13.3
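For quick experimentation, a sketch of sentence-pair inference on MRPC-style input could look like this (the example sentences are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "VitaliiVrublevskyi/v10_bert-base-uncased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task: label 1 = paraphrase, label 0 = not a paraphrase
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```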
nero1342/vcn-7b-v3-500it-crawl
nero1342
2023-11-10T14:21:30Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:vilm/vietcuna-7b-v3", "base_model:adapter:vilm/vietcuna-7b-v3", "region:us" ]
null
2023-11-10T09:08:29Z
--- library_name: peft base_model: vilm/vietcuna-7b-v3 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.2.dev0
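A minimal, unofficial sketch for loading this PEFT adapter on top of its base model (the prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "vilm/vietcuna-7b-v3"
adapter_id = "nero1342/vcn-7b-v3-500it-crawl"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the adapter weights from this repository
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Xin chào!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```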
afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-GGUF
afrideva
2023-11-10T14:16:19Z
79
4
null
[ "gguf", "ggml", "quantized", "q2_k", "q3_k_m", "q4_k_m", "q5_k_m", "q6_k", "q8_0", "text-generation", "pt", "en", "base_model:cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k", "base_model:quantized:cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k", "license:mit", "region:us" ]
text-generation
2023-11-10T14:12:38Z
--- base_model: cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k inference: false language: - pt - en license: mit model_creator: cnmoro model_name: TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k pipeline_tag: text-generation quantized_by: afrideva tags: - gguf - ggml - quantized - q2_k - q3_k_m - q4_k_m - q5_k_m - q6_k - q8_0 --- # cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-GGUF Quantized GGUF model files for [TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k) from [cnmoro](https://huggingface.co/cnmoro) | Name | Quant method | Size | | ---- | ---- | ---- | | [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q2_k.gguf) | q2_k | 482.14 MB | | [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q3_k_m.gguf) | q3_k_m | 549.85 MB | | [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q4_k_m.gguf) | q4_k_m | 667.81 MB | | [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q5_k_m.gguf) | q5_k_m | 782.04 MB | | [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q6_k.gguf) | q6_k | 903.41 MB | | [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q8_0.gguf) | q8_0 | 1.17 GB | ## Original Model Card: Finetuned version of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T, on a Portuguese instruct dataset, using axolotl. v0, v1 and v2 were finetuned for the default 2048 context length. For this v3, I have used the existing v2 and finetuned the model on a 8k context length dataset. It works fairly well, but it's reasoning capabilities are not so strong. It works well for basic RAG / question answering on retrieved content. Prompt format: f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n"
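One way to run the quantized files locally is with `llama-cpp-python`; a rough sketch using the prompt format above (the chosen file and sampling settings are just examples):

```python
from llama_cpp import Llama

# Any of the GGUF files listed above works; q4_k_m is a common size/quality trade-off
llm = Llama(
    model_path="tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.q4_k_m.gguf",
    n_ctx=8192,
)

instruction = "Explique brevemente o que é aprendizado de máquina."
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```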
TheBloke/Yi-6B-AWQ
TheBloke
2023-11-10T14:15:13Z
7
1
transformers
[ "transformers", "safetensors", "Yi", "text-generation", "custom_code", "base_model:01-ai/Yi-6B", "base_model:quantized:01-ai/Yi-6B", "license:other", "autotrain_compatible", "4-bit", "awq", "region:us" ]
text-generation
2023-11-10T11:33:47Z
--- base_model: 01-ai/Yi-6B inference: false license: other license_link: LICENSE license_name: yi-license model_creator: 01-ai model_name: Yi 6B model_type: yi pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: TheBloke widget: - output: text: " an eerie sense that something is just not right\u2026\nBetween the two\ \ worlds lies The Forgotten Kingdom - home to creatures long since thought extinct\ \ and ancient magic so strong it defies belief! Only here can you find what\ \ has been lost for centuries: An Elixir Of Life which will restore youth and\ \ vitality if only those who seek its power are brave enough to face up against\ \ all manner of dangers lurking in this mysterious land! But beware; some say\ \ there may even exist powerful entities beyond our comprehension whose intentions\ \ towards humanity remain unclear at best ---- they might want nothing more\ \ than destruction itself rather then anything else from their quest after immortality\ \ (and maybe someone should tell them about modern medicine)? In any event though\ \ \u2013 one thing remains true regardless : whether or not success comes easy\ \ depends entirely upon how much effort we put into conquering whatever challenges\ \ lie ahead along with having faith deep down inside ourselves too ;) So let\u2019\ s get started now shall We?" text: There's a place where time stands still. A place of breath taking wonder, but also --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Yi 6B - AWQ - Model creator: [01-ai](https://huggingface.co/01-ai) - Original model: [Yi 6B](https://huggingface.co/01-ai/Yi-6B) <!-- description start --> ## Description This repo contains AWQ model files for [01-ai's Yi 6B](https://huggingface.co/01-ai/Yi-6B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. 
It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-6B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-6B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-6B-GGUF) * [01-ai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/01-ai/Yi-6B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Yi-6B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.93 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Yi-6B-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Yi-6B-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Yi-6B-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. 
For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template = '''{prompt}
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Yi-6B-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Yi-6B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''{prompt}
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1,
)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/Yi-6B-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''{prompt} ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: 01-ai's Yi 6B <div align="center"> <img src="./Yi.svg" width="200px"> </div> ## Introduction The **Yi** series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). The first public release contains two bilingual(English/Chinese) base models with the parameter sizes of 6B([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B)) and 34B([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both of them are trained with 4K sequence length and can be extended to 32K during inference time. The [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base model with 200K context length. ## News - 🎯 **2023/11/06**: The base model of [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) with 200K context length. - 🎯 **2023/11/02**: The base model of [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and [`Yi-34B`](https://huggingface.co/01-ai/Yi-34B). 
## Model Performance | Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code | | :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: | | | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - | | LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 | | LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 | | Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 | | Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** | | Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 | | InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 | | Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - | | Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 | | Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 | | Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 | | **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 | | Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 | While benchmarking open-source models, we have observed a disparity between the results generated by our pipeline and those reported in public sources (e.g. OpenCompass). Upon conducting a more in-depth investigation of this difference, we have discovered that various models may employ different prompts, post-processing strategies, and sampling techniques, potentially resulting in significant variations in the outcomes. Our prompt and post-processing strategy remains consistent with the original benchmark, and greedy decoding is employed during evaluation without any post-processing for the generated content. For scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. To evaluate the model's capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due to technical constraints, we did not test Falcon-180 on QuAC and OBQA; the score is derived by averaging the scores on the remaining tasks. Since the scores for these two tasks are generally lower than the average, we believe that Falcon-180B's performance was not underestimated. ## Usage Please visit our [github repository](https://github.com/01-ai/Yi) for general guidance on how to use this model. ## Disclaimer Although we use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability, due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. 
## License The Yi series models are fully open for academic research and free commercial usage with permission via applications. All usage must adhere to the [Model License Agreement 2.0](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE). To apply for the official commercial license, please contact us ([[email protected]](mailto:[email protected])).
zwhe99/wmt21-comet-qe-mqm
zwhe99
2023-11-10T14:15:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2023-11-10T13:40:30Z
---
license: apache-2.0
metrics:
- comet
---

Creator: [Unbabel](https://unbabel.github.io/COMET/html/index.html)

This repository was created to enable direct use of the wmt21-comet-qe-mqm model with Python from the Hub.

Code example:

```bash
pip install --upgrade pip  # ensures that pip is current
pip install unbabel-comet
```

```python
from comet import download_model, load_from_checkpoint

model_path = download_model("zwhe99/wmt21-comet-qe-mqm")
model = load_from_checkpoint(model_path)
data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire."
    },
    {
        "src": "Schulen und Kindergärten wurden eröffnet.",
        "mt": "Schools and kindergartens were open",
        "ref": "Schools and kindergartens opened"
    }
]
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output)
```
camilon/clinical_longformer_same_tokens_1epochs_150k
camilon
2023-11-10T14:14:28Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "longformer", "fill-mask", "generated_from_trainer", "base_model:camilon/clinical_longformer_same_tokens_1epochs_100k", "base_model:finetune:camilon/clinical_longformer_same_tokens_1epochs_100k", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-10T11:05:54Z
--- base_model: camilon/clinical_longformer_same_tokens_1epochs_100k tags: - generated_from_trainer model-index: - name: clinical_longformer_same_tokens_1epochs_150k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clinical_longformer_same_tokens_1epochs_150k This model is a fine-tuned version of [camilon/clinical_longformer_same_tokens_1epochs_100k](https://huggingface.co/camilon/clinical_longformer_same_tokens_1epochs_100k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7029 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1112 | 0.18 | 65 | 1.7542 | | 1.7975 | 0.37 | 130 | 1.7314 | | 1.9007 | 0.55 | 195 | 1.7349 | | 1.9029 | 0.74 | 260 | 1.7021 | | 1.7688 | 0.92 | 325 | 1.7029 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
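As a quick sanity check of the masked-language-modelling head, a minimal fill-mask sketch could look like this (the example sentence is a placeholder):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="camilon/clinical_longformer_same_tokens_1epochs_150k")

# Build the example around whatever mask token the tokenizer actually uses
mask = fill_mask.tokenizer.mask_token
for candidate in fill_mask(f"The patient was admitted with acute {mask} failure."):
    print(candidate["token_str"], round(candidate["score"], 4))
```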
blueapple8259/ANHSY_0.1
blueapple8259
2023-11-10T14:03:35Z
61
0
transformers
[ "transformers", "safetensors", "gptj", "text-generation", "ko", "dataset:beomi/KoAlpaca-v1.1a", "dataset:royboy0416/ko-alpaca", "dataset:maywell/ko_wikidata_QA", "dataset:nlpai-lab/kullm-v2", "dataset:mssongit/KorfinQA", "dataset:kyujinpy/OpenOrca-KO", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-11-10T09:39:22Z
--- license: cc-by-sa-4.0 datasets: - beomi/KoAlpaca-v1.1a - royboy0416/ko-alpaca - maywell/ko_wikidata_QA - nlpai-lab/kullm-v2 - mssongit/KorfinQA - kyujinpy/OpenOrca-KO language: - ko --- [kogpt-j-base](https://huggingface.co/heegyu/kogpt-j-base)모델을 [여](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a)[러](https://huggingface.co/datasets/royboy0416/ko-alpaca) [데](https://huggingface.co/datasets/maywell/ko_wikidata_QA)[이](https://huggingface.co/datasets/nlpai-lab/kullm-v2)[터](https://huggingface.co/datasets/mssongit/KorfinQA)[셋](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO)을 사용해서 16k step만큼 학습시킨 모델입니다. 추가로 능지이슈가 있는 관계로 생성이 완료된 이후에 eos토큰 대신에 <끝>이 나옵니다. 프롬프트: ``` 당신은 사람들을 도와주는 인공지능 비서입니다. 질문을 읽고 알맞은 답변을 제공하세요. ### 질문: {prompt} ### 답변: ``` 데이터셋: [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) [royboy0416/ko-alpaca](https://huggingface.co/datasets/royboy0416/ko-alpaca) [maywell/ko_wikidata_QA](https://huggingface.co/datasets/maywell/ko_wikidata_QA) [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) [mssongit/KorfinQA](https://huggingface.co/datasets/mssongit/KorfinQA) [kyujinpy/OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO)
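A minimal generation sketch following the prompt template above (the question and sampling settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "blueapple8259/ANHSY_0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

question = "인공지능이 무엇인가요?"
prompt = (
    "당신은 사람들을 도와주는 인공지능 비서입니다. 질문을 읽고 알맞은 답변을 제공하세요.\n\n"
    f"### 질문:\n{question}\n\n### 답변:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# The card notes that generation ends with "<끝>" instead of the EOS token
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("<끝>")[0])
```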
Aykill02/ppo-Huggy
Aykill02
2023-11-10T13:57:49Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-10T13:57:44Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Aykill02/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
CountingMstar/ai-tutor-bert-model
CountingMstar
2023-11-10T13:52:18Z
7
3
transformers
[ "transformers", "safetensors", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2023-11-10T10:26:03Z
# AI Tutor BERT This model is a BERT model fine-tuned on artificial intelligence (AI) related terms and explanations. With the increasing interest in artificial intelligence, many people are taking AI-related courses and projects. However, as a graduate student in artificial intelligence, it's not common to find useful resources that are easy for AI beginners to understand. Furthermore, personalized lessons tailored to individual levels and fields are often lacking, making it difficult for many people to start learning about artificial intelligence. To address these challenges, our team has created a language model that plays the role of a tutor in the field of AI terminology. Details about the model type, training dataset, and usage are explained below, so please read them carefully and be sure to try it out. ## How to use? <img src="https://github.com/CountingMstar/AI_BERT/assets/90711707/45afcd24-7ef9-4149-85d4-2236e23fbf69" width="1400" height="700"/> https://huggingface.co/spaces/pseudolab/AI_Tutor_BERT As shown above, you can input passages (context) related to artificial intelligence and questions about terms. Upon pressing "Submit," you will receive corresponding explanations and answers on the right side. (This model only supports English.) ## Model https://huggingface.co/bert-base-uncased For the model, I used BERT, which is one of the most famous natural language processing models developed by Google. For more detailed information, please refer to the website mentioned above. To make the question-answering more like a private tutoring experience, I utilized a specialized Question and Answering model within BERT. Here's how you can load it: ``` from transformers import BertForQuestionAnswering model = BertForQuestionAnswering.from_pretrained("bert-base-uncased") ``` https://huggingface.co/CountingMstar/ai-tutor-bert-model Afterwards, I fine-tuned the original BertForQuestionAnswering model using the artificial intelligence-related datasets for my project on creating an AI tutoring model. You can find the fine-tuned AI Tutor BERT model at the link provided, and the usage in Python is as follows. ``` from transformers import BertForQuestionAnswering model = BertForQuestionAnswering.from_pretrained("CountingMstar/ai-tutor-bert-model") ``` ## Dataset ### Wikipedia https://en.wikipedia.org/wiki/Main_Page ### activeloop https://www.activeloop.ai/resources/glossary/arima-models/ ### Adrien Beaulieu https://product.house/100-ai-glossary-terms-explained-to-the-rest-of-us/ ``` Context: 'Feature engineering or feature extraction or feature discovery is the process of extracting features (characteristics, properties, attributes) from raw data. Due to deep learning networks, such as convolutional neural networks, that are able to learn features by themselves, domain-specific-based feature engineering has become obsolete for vision and speech processing. Other examples of features in physics include the construction of dimensionless numbers such as Reynolds number in fluid dynamics; then Nusselt number in heat transfer; Archimedes number in sedimentation; construction of first approximations of the solution such as analytical strength of materials solutions in mechanics, etc..' Question: 'What is large language model?' Answer: 'A large language model (LLM) is a type of language model notable for its ability to achieve general-purpose language understanding and generation.' 
``` The training dataset consists of three components: context, questions, and answers, all related to artificial intelligence. The response (correct answer) data is included within the context data, and the sentence order in the context data has been rearranged to augment the dataset. The question data is focused on artificial intelligence terms as the topic. You can refer to the example above for better understanding. In total, there are over 3,300 data points, stored in pickle files in the 'data' folder. The data has been extracted and processed using HTML from sources such as Wikipedia and other websites. The sources are as mentioned above. ## Training and Result https://github.com/CountingMstar/AI_BERT/blob/main/MY_AI_BERT_final.ipynb The training process involves loading data from the 'data' folder and utilizing the BERT Question and Answering model. Detailed instructions for model training and usage can be found in the link provided above. ``` N_EPOCHS = 10 optim = AdamW(model.parameters(), lr=5e-5) ``` I used 10 epochs for training, and I employed the Adam optimizer with a learning rate of 5e-5. <img src="https://github.com/CountingMstar/AI_BERT/assets/90711707/72142ff8-f5c8-47ea-9f19-1e6abb4072cd" width="500" height="400"/> <img src="https://github.com/CountingMstar/AI_BERT/assets/90711707/2dd78573-34eb-4ce9-ad4d-2237fc7a5b1e" width="500" height="400"/> The results, as shown in the graphs above, indicate that, at the last epoch, the loss is 6.917126256477786, and the accuracy is 0.9819078947368421, demonstrating that the model has been trained quite effectively. Thank you. --- # AI Tutor BERT (인공지능 과외 선생님 BERT) 이 모델은 인공지능(AI) 관련 용어 및 설명을 파인튜닝(fine-tuning)한 BERT 모델입니다. 최근 인공지능에 관한 관심이 높아지면서 많은 사람이 인공지능 관련 수업 및 프로젝트를 진행하고 있습니다. 그러나 인공지능 관련 대학원생으로서 이러한 수요에 비해 인공지능 초보자들이 잘 알아들을 수 있는 유용한 자료는 흔치 않습니다. 더불어 각자의 수준과 분야에 개인화된 강의 또한 부족한 상황이어서 많은 사람들이 인공지능 학습을 시작하기 어려워하고 있습니다. 이러한 문제를 해결하고자, 저희 팀은 인공지능 용어 도메인에서 과외 선생님 역할을 하는 언어모델을 만들었습니다. 모델의 종류, 학습 데이터셋, 사용법 등이 아래에 설명되어 있으니 자세히 읽어보시고, 꼭 사용해 보시기 바랍니다. ## How to use? <img src="https://github.com/CountingMstar/AI_BERT/assets/90711707/45afcd24-7ef9-4149-85d4-2236e23fbf69" width="1400" height="700"/> https://huggingface.co/spaces/pseudolab/AI_Tutor_BERT 위 그림과 같이 인공지능관련 지문(문맥)과 용어 관련 질문을 입력해주고 Submit을 눌러주면, 오른쪽에 해당 용어에 대한 설명 답변이 나옵니다. ## Model https://huggingface.co/bert-base-uncased 모델의 경우 자연어 처리 모델 중 가장 유명한 Google에서 개발한 BERT를 사용했습니다. 자세한 설명은 위 사이트를 참고하시기 바랍니다. 질의응답이 주인 과외 선생님답게, BERT 중에서도 질의응답에 특화된 Question and Answering 모델을 사용하였습니다. 불러오는 법은 다음과 같습니다. ``` from transformers import BertForQuestionAnswering model = BertForQuestionAnswering.from_pretrained("bert-base-uncased") ``` https://huggingface.co/CountingMstar/ai-tutor-bert-model 이후 오리지널 BertForQuestionAnswering 모델을 이 프로젝트 주제인 인공지능 과외 선생님 모델로 만들기 위해 아래의 인공지능 관련 데이터셋으로 파인튜닝을 해줬습니다. 이렇게 파인튜닝된 AI Tutor BERT 모델은 위 링크에서 찾아보실 수 있으며, 파이썬에서의 사용방법은 아래와 같습니다. ``` from transformers import BertForQuestionAnswering model = BertForQuestionAnswering.from_pretrained("CountingMstar/ai-tutor-bert-model") ``` ## Dataset ### Wikipedia https://en.wikipedia.org/wiki/Main_Page ### activeloop https://www.activeloop.ai/resources/glossary/arima-models/ ### Adrien Beaulieu https://product.house/100-ai-glossary-terms-explained-to-the-rest-of-us/ ``` Context: 'Feature engineering or feature extraction or feature discovery is the process of extracting features (characteristics, properties, attributes) from raw data. 
Due to deep learning networks, such as convolutional neural networks, that are able to learn features by themselves, domain-specific-based feature engineering has become obsolete for vision and speech processing. Other examples of features in physics include the construction of dimensionless numbers such as Reynolds number in fluid dynamics; then Nusselt number in heat transfer; Archimedes number in sedimentation; construction of first approximations of the solution such as analytical strength of materials solutions in mechanics, etc..' Question: 'What is large language model?' Answer: 'A large language model (LLM) is a type of language model notable for its ability to achieve general-purpose language understanding and generation.' ``` 학습 데이터셋은 인공지능 관련 문맥, 질문, 그리고 응답 이렇게 3가지로 구성이 되어있습니다. 응답(정답) 데이터는 문맥 데이터 안에 포함되어 있고, 문맥 데이터의 문장 순서를 바꿔주어 데이터를 증강하였습니다. 질문 데이터는 주제가 되는 인공지능 용어로 설정했습니다. 위의 예시를 보시면 이해하시기 편하실 겁니다. 총 데이터 수는 3300여 개로 data 폴더에 pickle 파일 형태로 저장되어 있고, 데이터는 Wikipedia 및 다른 사이트들을 에서 html을 이용하여 추출 및 가공하여 제작하였습니다. 해당 출처는 위와 같습니다. ## Training and Result https://github.com/CountingMstar/AI_BERT/blob/main/MY_AI_BERT_final.ipynb 학습 방식은 data 폴더의 데이터와 BERT Question and Answering 모델을 불어와 진행됩니다. 자세한 모델 학습 및 사용법은 위의 링크에 설명되어 있습니다. ``` N_EPOCHS = 10 optim = AdamW(model.parameters(), lr=5e-5) ``` 에포크(epoch)는 10을 사용했으며, 아담 옵티마이져와 러닝레이트는 5e-5를 사용했습니다. <img src="https://github.com/CountingMstar/AI_BERT/assets/90711707/72142ff8-f5c8-47ea-9f19-1e6abb4072cd" width="500" height="400"/> <img src="https://github.com/CountingMstar/AI_BERT/assets/90711707/2dd78573-34eb-4ce9-ad4d-2237fc7a5b1e" width="500" height="400"/> 결과는 위 그래프들과 같이 마지막 에포크 기준 loss = 6.917126256477786, accuracy = 0.9819078947368421로 상당히 학습이 잘 된 모습을 보여줍니다. 감사합니다.
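To run an actual question through the fine-tuned checkpoint, a short sketch with the question-answering pipeline could look like this (the context and question are illustrative, adapted from the example above):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="CountingMstar/ai-tutor-bert-model")

context = (
    "A large language model (LLM) is a type of language model notable for its "
    "ability to achieve general-purpose language understanding and generation."
)
result = qa(question="What is a large language model?", context=context)
print(result["answer"], round(result["score"], 4))
```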
LazzeKappa/L09
LazzeKappa
2023-11-10T13:52:10Z
0
0
null
[ "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2023-11-04T14:13:54Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: L09 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # L09 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.433 | 1.0 | 318 | 0.4347 | | 0.1729 | 2.0 | 636 | 0.1838 | | 0.0597 | 3.0 | 954 | 0.0642 | | 0.044 | 4.0 | 1272 | 0.0490 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
alperengozeten/bert-turkish-emotion-deprecated
alperengozeten
2023-11-10T13:38:25Z
4
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:dbmdz/bert-base-turkish-cased", "base_model:finetune:dbmdz/bert-base-turkish-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-10T13:34:39Z
--- license: mit base_model: dbmdz/bert-base-turkish-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3687 - Accuracy: 0.9130 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.8572 | 0.14 | 200 | 1.4831 | 0.4714 | | 1.2551 | 0.27 | 400 | 0.9639 | 0.6684 | | 0.886 | 0.41 | 600 | 0.7681 | 0.7507 | | 0.7313 | 0.55 | 800 | 0.5526 | 0.8317 | | 0.5804 | 0.69 | 1000 | 0.5308 | 0.8312 | | 0.5407 | 0.82 | 1200 | 0.4486 | 0.8595 | | 0.502 | 0.96 | 1400 | 0.5216 | 0.8516 | | 0.3737 | 1.1 | 1600 | 0.4527 | 0.8763 | | 0.3367 | 1.23 | 1800 | 0.4716 | 0.8544 | | 0.3272 | 1.37 | 2000 | 0.3905 | 0.8862 | | 0.2988 | 1.51 | 2200 | 0.3661 | 0.8926 | | 0.298 | 1.64 | 2400 | 0.4301 | 0.8898 | | 0.2856 | 1.78 | 2600 | 0.3944 | 0.8943 | | 0.2832 | 1.92 | 2800 | 0.3608 | 0.8979 | | 0.2483 | 2.06 | 3000 | 0.3757 | 0.8987 | | 0.1699 | 2.19 | 3200 | 0.3802 | 0.9100 | | 0.1433 | 2.33 | 3400 | 0.4144 | 0.9114 | | 0.1826 | 2.47 | 3600 | 0.3533 | 0.9124 | | 0.159 | 2.6 | 3800 | 0.3708 | 0.9107 | | 0.1601 | 2.74 | 4000 | 0.3775 | 0.9118 | | 0.1442 | 2.88 | 4200 | 0.3687 | 0.9130 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
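A minimal inference sketch with the `transformers` text-classification pipeline (the Turkish example sentence is a placeholder; the emotion label set comes from the checkpoint's config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="alperengozeten/bert-turkish-emotion-deprecated")

# Example Turkish sentence; returns the top emotion label and its score
print(classifier("Bugün kendimi çok mutlu hissediyorum."))
```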
lmqg/mt5-small-zhquad-qg-ae-trimmed-50000
lmqg
2023-11-10T13:38:15Z
3
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-10T10:38:39Z
# Vocabulary Trimmed [lmqg/mt5-small-zhquad-qg-ae](https://huggingface.co/lmqg/mt5-small-zhquad-qg-ae): `lmqg/mt5-small-zhquad-qg-ae-trimmed-50000`

This model is a trimmed version of [lmqg/mt5-small-zhquad-qg-ae](https://huggingface.co/lmqg/mt5-small-zhquad-qg-ae) produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.

|                            | lmqg/mt5-small-zhquad-qg-ae | lmqg/mt5-small-zhquad-qg-ae-trimmed-50000 |
|:---------------------------|:----------------------------|:------------------------------------------|
| parameter_size_full        | 300,165,504                 | 95,264,128                                |
| parameter_size_embedding   | 256,103,424                 | 51,202,048                                |
| vocab_size                 | 250,101                     | 50,002                                    |
| compression_rate_full      | 100.0                       | 31.74                                     |
| compression_rate_embedding | 100.0                       | 19.99                                     |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| zh       | vocabtrimmer/mc4_validation | text           | zh           | validation    | 50000             | 2             |
zwhe99/wmt21-comet-qe-da
zwhe99
2023-11-10T13:36:18Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2023-06-12T18:37:56Z
---
license: apache-2.0
metrics:
- comet
---

Creator: [Unbabel](https://unbabel.github.io/COMET/html/index.html)

This repository was created so that the wmt21-comet-qe-da model can be used directly from Python via the Hugging Face Hub.

Code example:

```python
# Install the dependencies first (shell):
#   pip install --upgrade pip   # ensures that pip is current
#   pip install unbabel-comet
from comet import download_model, load_from_checkpoint

model_path = download_model("zwhe99/wmt21-comet-qe-da")
model = load_from_checkpoint(model_path)
data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire."
    },
    {
        "src": "Schulen und Kindergärten wurden eröffnet.",
        "mt": "Schools and kindergartens were open",
        "ref": "Schools and kindergartens opened"
    }
]
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output)
```
Around6827/AdvertLlama-13b-chat
Around6827
2023-11-10T13:33:36Z
0
0
peft
[ "peft", "pytorch", "llama", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-13b-hf", "base_model:adapter:NousResearch/Llama-2-13b-hf", "8-bit", "bitsandbytes", "region:us" ]
null
2023-10-02T20:19:50Z
--- library_name: peft base_model: NousResearch/Llama-2-13b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0 ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0
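The card's "How to Get Started with the Model" section is still a placeholder, so here is a hedged sketch of how a PEFT adapter like this is typically loaded on top of its stated base model (`NousResearch/Llama-2-13b-hf`). It assumes the repository contains standard PEFT adapter files; the prompt text is purely illustrative, and dtype/quantization should be adjusted to your hardware.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "Around6827/AdvertLlama-13b-chat"

# Loads the base model named in the adapter config, then applies the adapter weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-13b-hf")

prompt = "Write a short advertising tagline for a reusable water bottle."  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```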
Jasonaron/ppo-Huggy
Jasonaron
2023-11-10T13:32:00Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-10T13:31:55Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Jasonaron/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
Juliacnc/dqn-SpaceInvadersNoFrameskip
Juliacnc
2023-11-10T13:27:13Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T13:26:39Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 664.00 +/- 139.66 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Juliacnc -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Juliacnc -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Juliacnc ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
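Besides the RL Zoo CLI shown above, the checkpoint can also be pulled and loaded directly from Python. This is a hedged sketch: the exact filename inside the repository (`dqn-SpaceInvadersNoFrameskip-v4.zip` below) is an assumption based on the usual RL Zoo naming, so check the repo's file list before running.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is assumed from the usual RL Zoo upload naming; verify it in the repo.
checkpoint = load_from_hub(
    repo_id="Juliacnc/dqn-SpaceInvadersNoFrameskip",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# Overriding schedules avoids unpickling issues across SB3 versions.
custom_objects = {
    "learning_rate": 0.0,
    "lr_schedule": lambda _: 0.0,
    "exploration_schedule": lambda _: 0.0,
}
model = DQN.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
print(model.policy)
```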
mrorii/kenlm
mrorii
2023-11-10T13:10:57Z
0
2
null
[ "kenlm", "perplexity", "n-gram", "kneser-ney", "bigscience", "ja", "ko", "dataset:wikipedia", "dataset:oscar", "license:mit", "region:us" ]
null
2023-11-10T05:45:43Z
--- license: mit datasets: - wikipedia - oscar language: - ja - ko tags: - kenlm - perplexity - n-gram - kneser-ney - bigscience --- # KenLM models This repo is a copy of [edugp/kenlm](https://huggingface.co/edugp/kenlm) but for the Japanese and Korean languages. The Wikipedia models were trained using the `20231106` dump.
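As a rough usage sketch (not part of the original card): the models can be scored with the `kenlm` Python bindings after downloading a file from the repo. The file path used below is a guess based on the layout of the upstream `edugp/kenlm` repo, so check the actual file list before running.

```python
import kenlm
from huggingface_hub import hf_hub_download

# The exact path inside the repo is an assumption; browse the repo files to confirm.
model_path = hf_hub_download("mrorii/kenlm", "wikipedia/ja.arpa.bin")
model = kenlm.Model(model_path)

sentence = "東京 は 日本 の 首都 です"  # pre-tokenized, space-separated tokens
print(model.score(sentence, bos=True, eos=True))  # log10 probability
print(model.perplexity(sentence))
```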
Ameyapores/self_dreambooth
Ameyapores
2023-11-10T13:10:29Z
3
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-11-03T17:36:23Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a arpx person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---

# DreamBooth trained by AutoTrain

Text encoder was not trained.
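The card itself gives no inference snippet, so here is a hedged sketch. It assumes the repo stores DreamBooth LoRA weights on top of the SDXL base model, which is what AutoTrain DreamBooth usually produces, and reuses the `photo of a arpx person` instance prompt from the metadata.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Assumes the repository contains DreamBooth LoRA weights (the usual AutoTrain output).
pipe.load_lora_weights("Ameyapores/self_dreambooth")

image = pipe(
    prompt="photo of a arpx person",  # instance prompt from the model card metadata
    num_inference_steps=30,
).images[0]
image.save("arpx.png")
```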
TheBloke/Yi-34B-AWQ
TheBloke
2023-11-10T12:55:18Z
18
7
transformers
[ "transformers", "safetensors", "Yi", "text-generation", "custom_code", "base_model:01-ai/Yi-34B", "base_model:quantized:01-ai/Yi-34B", "license:other", "autotrain_compatible", "4-bit", "awq", "region:us" ]
text-generation
2023-11-10T11:53:25Z
--- base_model: 01-ai/Yi-34B inference: false license: other license_link: LICENSE license_name: yi-license model_creator: 01-ai model_name: Yi 34B model_type: yi pipeline_tag: text-generation prompt_template: 'Human: {prompt} Assistant: ' quantized_by: TheBloke widget: - output: text: ' of great danger. A place where the very air you breathe could kill you. A place where the only way to survive is to be prepared. The place is called the Arctic. The Arctic is a vast, frozen wilderness. It is a place of extremes. The temperatures can drop to -40 degrees Celsius. The winds can reach speeds of 100 kilometers per hour. The sun can shine for 24 hours a day, or not at all for weeks on end. The Arctic is also a place of great beauty. The ice and snow are a pristine white. The sky is a deep blue. The sunsets are spectacular. But the Arctic is also a place of great danger. The ice can be treacherous. The winds can be deadly. The sun can be blinding. The Arctic is a place where the only way to survive is to be prepared. The Arctic is a place of extremes. The temperatures can drop to -40 degrees Celsius. The winds can reach speeds of 100 kilometers per hour. The sun can shine for 24 hours a day, or not at all for weeks on end. The Arctic is a place of great beauty. The ice and snow are a' text: There's a place where time stands still. A place of breath taking wonder, but also --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Yi 34B - AWQ - Model creator: [01-ai](https://huggingface.co/01-ai) - Original model: [Yi 34B](https://huggingface.co/01-ai/Yi-34B) <!-- description start --> ## Description This repo contains AWQ model files for [01-ai's Yi 34B](https://huggingface.co/01-ai/Yi-34B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. 
It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-34B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-34B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-34B-GGUF) * [01-ai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/01-ai/Yi-34B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Yi ``` Human: {prompt} Assistant: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Yi-34B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 19.23 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Yi-34B-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Yi-34B-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Yi-34B-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. 
For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Plain template string (not an f-string): `prompt` is filled in by .format() below.
prompt_template = '''Human: {prompt}

Assistant: '''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Yi-34B-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Yi-34B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''Human: {prompt}

Assistant: '''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print("Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/Yi-34B-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''Human: {prompt} Assistant: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: 01-ai's Yi 34B <div align="center"> <img src="./Yi.svg" width="200px"> </div> ## Introduction The **Yi** series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). The first public release contains two bilingual(English/Chinese) base models with the parameter sizes of 6B([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B)) and 34B([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both of them are trained with 4K sequence length and can be extended to 32K during inference time. The [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base model with 200K context length. ## News - 🎯 **2023/11/06**: The base model of [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) with 200K context length. - 🎯 **2023/11/02**: The base model of [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and [`Yi-34B`](https://huggingface.co/01-ai/Yi-34B). 
## Model Performance | Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code | | :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: | | | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - | | LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 | | LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 | | Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 | | Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** | | Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 | | InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 | | Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - | | Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 | | Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 | | Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 | | **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 | | Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 | While benchmarking open-source models, we have observed a disparity between the results generated by our pipeline and those reported in public sources (e.g. OpenCompass). Upon conducting a more in-depth investigation of this difference, we have discovered that various models may employ different prompts, post-processing strategies, and sampling techniques, potentially resulting in significant variations in the outcomes. Our prompt and post-processing strategy remains consistent with the original benchmark, and greedy decoding is employed during evaluation without any post-processing for the generated content. For scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. To evaluate the model's capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due to technical constraints, we did not test Falcon-180 on QuAC and OBQA; the score is derived by averaging the scores on the remaining tasks. Since the scores for these two tasks are generally lower than the average, we believe that Falcon-180B's performance was not underestimated. ## Usage Please visit our [github repository](https://github.com/01-ai/Yi) for general guidance on how to use this model. ## Disclaimer Although we use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability, due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. 
## License The Yi series models are fully open for academic research and free commercial usage with permission via applications. All usage must adhere to the [Model License Agreement 2.0](https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE). To apply for the official commercial license, please contact us ([[email protected]](mailto:[email protected])).
cnmoro/ptt5-base-ptbr-summarization
cnmoro
2023-11-10T12:54:28Z
11
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "summarization", "pt", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2023-11-09T14:33:11Z
--- license: mit language: - pt tags: - summarization ---
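The card is otherwise empty, so this is a minimal, hedged usage sketch with the `transformers` summarization pipeline; the generation settings (and whether any prompt prefix is required) are assumptions, not documented behaviour.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="cnmoro/ptt5-base-ptbr-summarization",
)

texto = (
    "A inteligência artificial tem avançado rapidamente nos últimos anos, "
    "com aplicações em saúde, educação e indústria, levantando também "
    "discussões importantes sobre ética e regulamentação."
)
print(summarizer(texto, max_length=60, min_length=10)[0]["summary_text"])
```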
abdel1311/q-FrozenLake-v1-4x4-noSlippery
abdel1311
2023-11-10T12:54:24Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T12:54:22Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook that
# downloads and unpickles the model dict from the Hub.
model = load_from_hub(repo_id="abdel1311/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
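As a follow-up sketch (not from the original card): once the pickled model is loaded, a greedy rollout usually looks like the code below. It assumes the pickle exposes a `qtable` key alongside `env_id`, which is the convention used in the Deep RL course; verify against the actual file.

```python
import gym
import numpy as np

# `model` is the dict returned by load_from_hub in the snippet above.
# Note: reset()/step() below follow the classic gym API; adjust for gymnasium / gym >= 0.26.
env = gym.make(model["env_id"], is_slippery=False)

state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward

print("Episode return:", total_reward)
```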
AntoineD/camembert_squadFR_question_answering_tools_fr
AntoineD
2023-11-10T12:49:42Z
3
0
transformers
[ "transformers", "pytorch", "camembert", "question-answering", "generated_from_trainer", "base_model:AgentPublic/camembert-base-squadFR-fquad-piaf", "base_model:finetune:AgentPublic/camembert-base-squadFR-fquad-piaf", "endpoints_compatible", "region:us" ]
question-answering
2023-11-09T16:21:05Z
---
base_model: etalab-ia/camembert-base-squadFR-fquad-piaf
tags:
- generated_from_trainer
model-index:
- name: camembert_squadFR_question_answering_tools_fr
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# camembert_squadFR_question_answering_tools_fr

This model is a fine-tuned version of [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7066
- Learning Rate: 0.0001

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60

### Training results

| Training Loss | Epoch | Step | Validation Loss | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 1.0 | 7 | 1.8851 | 0.0001 |
| No log | 2.0 | 14 | 1.0053 | 0.0001 |
| No log | 3.0 | 21 | 1.4041 | 0.0001 |
| No log | 4.0 | 28 | 1.0443 | 0.0001 |
| No log | 5.0 | 35 | 0.9649 | 0.0001 |
| No log | 6.0 | 42 | 1.6062 | 9e-05 |
| No log | 7.0 | 49 | 1.3678 | 0.0001 |
| No log | 8.0 | 56 | 1.2775 | 0.0001 |
| No log | 9.0 | 63 | 1.1872 | 0.0001 |
| No log | 10.0 | 70 | 1.4032 | 0.0001 |
| No log | 11.0 | 77 | 1.7370 | 0.0001 |
| No log | 12.0 | 84 | 2.1178 | 8e-05 |
| No log | 13.0 | 91 | 1.7708 | 0.0001 |
| No log | 14.0 | 98 | 1.7846 | 0.0001 |
| No log | 15.0 | 105 | 1.7066 | 0.0001 |

### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1
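The card has no usage example, so the following is a small sketch with the standard `transformers` question-answering pipeline; the French context/question pair is only illustrative.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="AntoineD/camembert_squadFR_question_answering_tools_fr",
)

contexte = (
    "La perceuse à colonne sert à percer des trous précis dans le métal ou le bois. "
    "Elle doit être utilisée avec des lunettes de protection."
)
question = "À quoi sert la perceuse à colonne ?"
print(qa(question=question, context=contexte))
```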
dylan1/cache_model1
dylan1
2023-11-10T12:48:24Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "region:us" ]
null
2023-11-08T09:18:51Z
--- library_name: peft base_model: /kaggle/input/best-llm-starter-pack/nous_hermes/Nous-Hermes-Llama2-13b/snapshots/8f95aa9cd207db7b24179fc779c2b8973e71bee2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.1
tourist800/ORKG-zephyr-7b-alpha-finetune
tourist800
2023-11-10T12:46:54Z
1
0
peft
[ "peft", "arxiv:1910.09700", "base_model:HuggingFaceH4/zephyr-7b-alpha", "base_model:adapter:HuggingFaceH4/zephyr-7b-alpha", "region:us" ]
null
2023-11-10T12:46:16Z
--- library_name: peft base_model: HuggingFaceH4/zephyr-7b-alpha --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0
hkivancoral/hushem_1x_deit_tiny_sgd_lr001_fold4
hkivancoral
2023-11-10T12:44:24Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T12:42:20Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_deit_tiny_sgd_lr001_fold4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.47619047619047616 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2453 - Accuracy: 0.4762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5316 | 0.1905 | | 1.5831 | 2.0 | 12 | 1.5015 | 0.1905 | | 1.5831 | 3.0 | 18 | 1.4762 | 0.1667 | | 1.5346 | 4.0 | 24 | 1.4541 | 0.1905 | | 1.5081 | 5.0 | 30 | 1.4366 | 0.2381 | | 1.5081 | 6.0 | 36 | 1.4200 | 0.2857 | | 1.4598 | 7.0 | 42 | 1.4054 | 0.2857 | | 1.4598 | 8.0 | 48 | 1.3912 | 0.2857 | | 1.4326 | 9.0 | 54 | 1.3788 | 0.3095 | | 1.3952 | 10.0 | 60 | 1.3675 | 0.3571 | | 1.3952 | 11.0 | 66 | 1.3571 | 0.3810 | | 1.3596 | 12.0 | 72 | 1.3480 | 0.3810 | | 1.3596 | 13.0 | 78 | 1.3393 | 0.3810 | | 1.363 | 14.0 | 84 | 1.3316 | 0.3810 | | 1.3301 | 15.0 | 90 | 1.3251 | 0.4048 | | 1.3301 | 16.0 | 96 | 1.3178 | 0.4048 | | 1.3095 | 17.0 | 102 | 1.3113 | 0.4048 | | 1.3095 | 18.0 | 108 | 1.3061 | 0.4048 | | 1.3044 | 19.0 | 114 | 1.3014 | 0.4048 | | 1.2995 | 20.0 | 120 | 1.2970 | 0.4048 | | 1.2995 | 21.0 | 126 | 1.2921 | 0.4048 | | 1.2717 | 22.0 | 132 | 1.2882 | 0.4048 | | 1.2717 | 23.0 | 138 | 1.2838 | 0.4048 | | 1.2926 | 24.0 | 144 | 1.2801 | 0.4048 | | 1.2458 | 25.0 | 150 | 1.2760 | 0.4048 | | 1.2458 | 26.0 | 156 | 1.2723 | 0.4286 | | 1.2592 | 27.0 | 162 | 1.2686 | 0.4286 | | 1.2592 | 28.0 | 168 | 1.2659 | 0.4286 | | 1.2355 | 29.0 | 174 | 1.2631 | 0.4286 | | 1.2526 | 30.0 | 180 | 1.2605 | 0.4286 | | 1.2526 | 31.0 | 186 | 1.2579 | 0.4524 | | 1.2439 | 32.0 | 192 | 1.2557 | 0.4524 | | 1.2439 | 33.0 | 198 | 1.2536 | 0.4524 | | 1.1949 | 34.0 | 204 | 1.2519 | 0.4524 | | 1.2285 | 35.0 | 210 | 1.2501 | 0.4524 | | 1.2285 | 36.0 | 216 | 1.2488 | 0.4524 | | 1.2118 | 37.0 | 222 | 1.2477 | 0.4524 | | 1.2118 | 38.0 | 228 | 1.2468 | 0.4762 | | 1.2136 | 39.0 | 234 | 1.2462 | 0.4762 | | 1.2259 | 40.0 | 240 | 1.2457 | 0.4762 | | 1.2259 | 41.0 | 246 | 1.2454 | 0.4762 | | 1.2204 | 42.0 | 252 | 1.2453 | 0.4762 | | 1.2204 | 43.0 | 258 | 1.2453 | 0.4762 | | 1.2061 | 44.0 | 264 | 1.2453 | 0.4762 | | 1.2146 | 45.0 | 270 | 1.2453 | 0.4762 | | 1.2146 | 46.0 | 276 | 1.2453 | 0.4762 | | 1.2137 | 47.0 | 282 | 1.2453 | 0.4762 | | 1.2137 | 48.0 | 288 | 1.2453 | 0.4762 | | 1.2227 | 49.0 | 294 | 1.2453 | 0.4762 | 
| 1.2027 | 50.0 | 300 | 1.2453 | 0.4762 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
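No usage snippet is included in the card; the sketch below is a hedged example using the `transformers` image-classification pipeline. The label set comes from the (unnamed) imagefolder dataset, so the predicted class names depend on that dataset's folder structure, and the image path is illustrative.

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_deit_tiny_sgd_lr001_fold4",
)

image = Image.open("example.jpg")  # any RGB image; path is illustrative
for prediction in classifier(image, top_k=4):
    print(prediction["label"], round(prediction["score"], 3))
```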
tuanio/fine-w2v2base-bs16-ep200-lr2e-05-linguistic-rmsnorm-focal_ctc_a0.99_g2-0.05_10_0.004_40-final
tuanio
2023-11-10T12:42:47Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "generated_from_trainer", "base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h", "base_model:finetune:nguyenvulebinh/wav2vec2-base-vietnamese-250h", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2023-11-10T10:28:16Z
--- license: cc-by-nc-4.0 base_model: nguyenvulebinh/wav2vec2-base-vietnamese-250h tags: - generated_from_trainer metrics: - wer model-index: - name: fine-w2v2base-bs16-ep200-lr2e-05-linguistic-rmsnorm-focal_ctc_a0.99_g2-0.05_10_0.004_40-final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-w2v2base-bs16-ep200-lr2e-05-linguistic-rmsnorm-focal_ctc_a0.99_g2-0.05_10_0.004_40-final This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.2098 - Wer: 0.1111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:-------:| | 2070.5209 | 0.94 | 50 | 1050.6107 | 15.9194 | | 1893.1078 | 1.89 | 100 | 856.5042 | 15.9084 | | 1402.1547 | 2.83 | 150 | 638.6856 | 15.9718 | | 1145.3789 | 3.77 | 200 | 541.5690 | 15.9801 | | 921.5391 | 4.72 | 250 | 388.8102 | 15.9792 | | 272.1148 | 5.66 | 300 | 86.7826 | 1.0 | | 111.1403 | 6.6 | 350 | 82.6906 | 1.0 | | 104.6625 | 7.55 | 400 | 80.7376 | 1.0 | | 99.9559 | 8.49 | 450 | 79.5480 | 1.0 | | 99.3013 | 9.43 | 500 | 78.0927 | 1.0 | | 97.293 | 10.38 | 550 | 76.9956 | 1.0 | | 98.062 | 11.32 | 600 | 76.4573 | 1.0 | | 96.0945 | 12.26 | 650 | 75.6026 | 1.0 | | 95.9684 | 13.21 | 700 | 75.6452 | 1.0 | | 94.6767 | 14.15 | 750 | 75.9780 | 1.0 | | 93.8767 | 15.09 | 800 | 77.8212 | 0.9992 | | 91.9104 | 16.04 | 850 | 78.9036 | 1.0012 | | 89.6319 | 16.98 | 900 | 74.7778 | 0.9991 | | 87.5197 | 17.92 | 950 | 73.3647 | 0.9993 | | 86.9794 | 18.87 | 1000 | 71.1035 | 0.9998 | | 84.6621 | 19.81 | 1050 | 67.8181 | 0.9998 | | 81.1323 | 20.75 | 1100 | 53.9551 | 0.8985 | | 56.2753 | 21.7 | 1150 | 25.1096 | 0.3755 | | 32.2015 | 22.64 | 1200 | 15.0509 | 0.2327 | | 22.3559 | 23.58 | 1250 | 11.2544 | 0.1796 | | 17.766 | 24.53 | 1300 | 9.1760 | 0.1561 | | 15.3377 | 25.47 | 1350 | 7.9738 | 0.1502 | | 13.0247 | 26.42 | 1400 | 7.1329 | 0.1391 | | 12.047 | 27.36 | 1450 | 6.4816 | 0.1260 | | 10.874 | 28.3 | 1500 | 6.0990 | 0.1260 | | 10.3489 | 29.25 | 1550 | 6.0334 | 0.1276 | | 9.5992 | 30.19 | 1600 | 5.6333 | 0.1204 | | 8.7578 | 31.13 | 1650 | 5.4704 | 0.1115 | | 8.8291 | 32.08 | 1700 | 5.2070 | 0.1063 | | 8.1346 | 33.02 | 1750 | 5.1131 | 0.1092 | | 7.7698 | 33.96 | 1800 | 4.9853 | 0.1059 | | 7.2385 | 34.91 | 1850 | 4.9884 | 0.1092 | | 7.2942 | 35.85 | 1900 | 4.9169 | 0.1004 | | 7.1231 | 36.79 | 1950 | 4.7677 | 0.1009 | | 6.6689 | 37.74 | 2000 | 4.8707 | 0.1078 | | 6.6686 | 38.68 | 2050 | 4.6952 | 0.1023 | | 6.3965 | 39.62 | 2100 | 4.9130 | 0.1065 | | 6.2281 | 40.57 | 2150 | 4.6463 | 0.0982 | | 5.8648 | 41.51 | 2200 | 4.8060 | 0.1083 | | 5.8669 | 42.45 | 2250 | 4.7226 | 0.1088 | | 5.4889 | 43.4 | 2300 | 4.6982 | 
0.1104 | | 5.5636 | 44.34 | 2350 | 4.6289 | 0.1089 | | 5.512 | 45.28 | 2400 | 4.4615 | 0.1035 | | 5.1006 | 46.23 | 2450 | 4.4759 | 0.0973 | | 5.04 | 47.17 | 2500 | 4.4644 | 0.1072 | | 4.7533 | 48.11 | 2550 | 4.3047 | 0.1011 | | 4.811 | 49.06 | 2600 | 4.3995 | 0.0978 | | 4.4865 | 50.0 | 2650 | 4.2904 | 0.0945 | | 4.41 | 50.94 | 2700 | 4.2735 | 0.0919 | | 4.6938 | 51.89 | 2750 | 4.2735 | 0.0929 | | 4.2775 | 52.83 | 2800 | 4.3565 | 0.0944 | | 4.4868 | 53.77 | 2850 | 4.3067 | 0.0936 | | 4.3502 | 54.72 | 2900 | 4.3263 | 0.1015 | | 3.9422 | 55.66 | 2950 | 4.1456 | 0.0986 | | 3.83 | 56.6 | 3000 | 4.1247 | 0.0994 | | 4.0432 | 57.55 | 3050 | 4.1449 | 0.0943 | | 3.9007 | 58.49 | 3100 | 4.2760 | 0.1001 | | 3.7194 | 59.43 | 3150 | 4.1489 | 0.0938 | | 3.791 | 60.38 | 3200 | 4.1865 | 0.0952 | | 3.439 | 61.32 | 3250 | 4.0903 | 0.0978 | | 3.666 | 62.26 | 3300 | 4.1479 | 0.1019 | | 3.3243 | 63.21 | 3350 | 4.0614 | 0.1013 | | 3.389 | 64.15 | 3400 | 4.0781 | 0.0987 | | 3.3151 | 65.09 | 3450 | 4.2045 | 0.1063 | | 3.6432 | 66.04 | 3500 | 4.2502 | 0.1057 | | 3.3547 | 66.98 | 3550 | 4.0707 | 0.0946 | | 3.323 | 67.92 | 3600 | 4.1075 | 0.0962 | | 3.1881 | 68.87 | 3650 | 4.1951 | 0.0992 | | 3.2008 | 69.81 | 3700 | 4.1416 | 0.0945 | | 3.079 | 70.75 | 3750 | 4.1982 | 0.0923 | | 3.0741 | 71.7 | 3800 | 4.2177 | 0.0985 | | 2.9199 | 72.64 | 3850 | 4.2224 | 0.0969 | | 2.9009 | 73.58 | 3900 | 4.1863 | 0.0956 | | 2.6505 | 74.53 | 3950 | 4.1560 | 0.0987 | | 2.9569 | 75.47 | 4000 | 4.1147 | 0.0888 | | 2.7948 | 76.42 | 4050 | 4.2427 | 0.1057 | | 2.9366 | 77.36 | 4100 | 4.3038 | 0.1091 | | 2.9399 | 78.3 | 4150 | 4.2281 | 0.1020 | | 2.5798 | 79.25 | 4200 | 4.2448 | 0.0980 | | 2.715 | 80.19 | 4250 | 4.1647 | 0.0931 | | 2.615 | 81.13 | 4300 | 4.1305 | 0.0952 | | 2.6131 | 82.08 | 4350 | 4.2630 | 0.0984 | | 2.5931 | 83.02 | 4400 | 4.1665 | 0.1034 | | 2.4909 | 83.96 | 4450 | 4.1648 | 0.0947 | | 2.5452 | 84.91 | 4500 | 4.1319 | 0.1029 | | 2.3713 | 85.85 | 4550 | 4.0906 | 0.1014 | | 2.452 | 86.79 | 4600 | 4.0809 | 0.0968 | | 2.3391 | 87.74 | 4650 | 4.1726 | 0.0990 | | 2.3136 | 88.68 | 4700 | 4.1336 | 0.0933 | | 2.2644 | 89.62 | 4750 | 4.1530 | 0.1041 | | 2.0899 | 90.57 | 4800 | 4.2035 | 0.1102 | | 2.4311 | 91.51 | 4850 | 4.1507 | 0.0989 | | 1.9583 | 92.45 | 4900 | 4.2440 | 0.0996 | | 2.4467 | 93.4 | 4950 | 4.1794 | 0.1077 | | 2.1111 | 94.34 | 5000 | 4.1224 | 0.0926 | | 2.0238 | 95.28 | 5050 | 4.1248 | 0.0948 | | 2.1593 | 96.23 | 5100 | 4.2034 | 0.1085 | | 2.033 | 97.17 | 5150 | 4.1157 | 0.1119 | | 2.0795 | 98.11 | 5200 | 4.1638 | 0.1004 | | 2.0027 | 99.06 | 5250 | 4.1367 | 0.1029 | | 2.0702 | 100.0 | 5300 | 4.1131 | 0.0993 | | 2.0022 | 100.94 | 5350 | 4.0984 | 0.1034 | | 2.0313 | 101.89 | 5400 | 4.1044 | 0.0979 | | 2.0468 | 102.83 | 5450 | 4.1019 | 0.0982 | | 1.9196 | 103.77 | 5500 | 4.1935 | 0.1070 | | 1.8988 | 104.72 | 5550 | 4.1279 | 0.1032 | | 1.9784 | 105.66 | 5600 | 4.1553 | 0.1068 | | 2.0349 | 106.6 | 5650 | 4.1259 | 0.1060 | | 1.6378 | 107.55 | 5700 | 4.1543 | 0.1056 | | 1.7948 | 108.49 | 5750 | 4.1599 | 0.1122 | | 1.8042 | 109.43 | 5800 | 4.1429 | 0.1113 | | 1.7872 | 110.38 | 5850 | 4.1495 | 0.1032 | | 1.8428 | 111.32 | 5900 | 4.1143 | 0.1151 | | 1.8995 | 112.26 | 5950 | 4.1219 | 0.1019 | | 1.7064 | 113.21 | 6000 | 4.1017 | 0.1115 | | 1.5617 | 114.15 | 6050 | 4.0737 | 0.1088 | | 1.7554 | 115.09 | 6100 | 4.1050 | 0.1048 | | 1.7072 | 116.04 | 6150 | 4.1199 | 0.1077 | | 1.6821 | 116.98 | 6200 | 4.1431 | 0.1037 | | 1.6876 | 117.92 | 6250 | 4.1442 | 0.1074 | | 1.6461 | 118.87 | 6300 | 4.1750 | 0.1019 | | 1.5313 | 119.81 | 6350 | 
4.1441 | 0.1092 | | 1.7041 | 120.75 | 6400 | 4.1632 | 0.1087 | | 1.6251 | 121.7 | 6450 | 4.1980 | 0.1094 | | 1.6317 | 122.64 | 6500 | 4.1192 | 0.1034 | | 1.5896 | 123.58 | 6550 | 4.1356 | 0.1121 | | 1.5714 | 124.53 | 6600 | 4.1736 | 0.1090 | | 1.3745 | 125.47 | 6650 | 4.2218 | 0.1094 | | 1.7257 | 126.42 | 6700 | 4.2172 | 0.1138 | | 1.524 | 127.36 | 6750 | 4.1964 | 0.1099 | | 1.4954 | 128.3 | 6800 | 4.2411 | 0.1101 | | 1.5402 | 129.25 | 6850 | 4.1481 | 0.1079 | | 1.5668 | 130.19 | 6900 | 4.1864 | 0.1081 | | 1.5251 | 131.13 | 6950 | 4.1792 | 0.1161 | | 1.6132 | 132.08 | 7000 | 4.1093 | 0.1094 | | 1.6573 | 133.02 | 7050 | 4.1153 | 0.1122 | | 1.5327 | 133.96 | 7100 | 4.1231 | 0.1129 | | 1.5617 | 134.91 | 7150 | 4.1707 | 0.1200 | | 1.5798 | 135.85 | 7200 | 4.1301 | 0.1141 | | 1.5294 | 136.79 | 7250 | 4.1376 | 0.1149 | | 1.4742 | 137.74 | 7300 | 4.1316 | 0.1149 | | 1.569 | 138.68 | 7350 | 4.1947 | 0.1154 | | 1.5434 | 139.62 | 7400 | 4.1617 | 0.1130 | | 1.4833 | 140.57 | 7450 | 4.1586 | 0.1187 | | 1.3112 | 141.51 | 7500 | 4.1543 | 0.1125 | | 1.4757 | 142.45 | 7550 | 4.1885 | 0.1127 | | 1.4602 | 143.4 | 7600 | 4.1938 | 0.1185 | | 1.3891 | 144.34 | 7650 | 4.2258 | 0.1134 | | 1.5484 | 145.28 | 7700 | 4.2443 | 0.1130 | | 1.3533 | 146.23 | 7750 | 4.2355 | 0.1064 | | 1.3938 | 147.17 | 7800 | 4.2510 | 0.1087 | | 1.422 | 148.11 | 7850 | 4.2208 | 0.1174 | | 1.2897 | 149.06 | 7900 | 4.2606 | 0.1180 | | 1.4107 | 150.0 | 7950 | 4.2759 | 0.1113 | | 1.3735 | 150.94 | 8000 | 4.2398 | 0.1098 | | 1.4142 | 151.89 | 8050 | 4.2370 | 0.1080 | | 1.3136 | 152.83 | 8100 | 4.2353 | 0.1061 | | 1.4554 | 153.77 | 8150 | 4.2255 | 0.1090 | | 1.4135 | 154.72 | 8200 | 4.2362 | 0.1107 | | 1.3512 | 155.66 | 8250 | 4.2431 | 0.1099 | | 1.3081 | 156.6 | 8300 | 4.2480 | 0.1097 | | 1.2292 | 157.55 | 8350 | 4.2302 | 0.1101 | | 1.3 | 158.49 | 8400 | 4.2558 | 0.1124 | | 1.368 | 159.43 | 8450 | 4.2727 | 0.1082 | | 1.3324 | 160.38 | 8500 | 4.2577 | 0.1121 | | 1.293 | 161.32 | 8550 | 4.2435 | 0.1153 | | 1.2726 | 162.26 | 8600 | 4.2194 | 0.1146 | | 1.3561 | 163.21 | 8650 | 4.2485 | 0.1170 | | 1.2194 | 164.15 | 8700 | 4.2325 | 0.1115 | | 1.3088 | 165.09 | 8750 | 4.2530 | 0.1121 | | 1.3285 | 166.04 | 8800 | 4.2556 | 0.1116 | | 1.2224 | 166.98 | 8850 | 4.2561 | 0.1098 | | 1.3535 | 167.92 | 8900 | 4.2463 | 0.1108 | | 1.2354 | 168.87 | 8950 | 4.2457 | 0.1073 | | 1.2799 | 169.81 | 9000 | 4.2256 | 0.1098 | | 1.2153 | 170.75 | 9050 | 4.2130 | 0.1088 | | 1.1879 | 171.7 | 9100 | 4.1974 | 0.1087 | | 1.2708 | 172.64 | 9150 | 4.2232 | 0.1133 | | 1.3335 | 173.58 | 9200 | 4.2444 | 0.1118 | | 1.3543 | 174.53 | 9250 | 4.2460 | 0.1142 | | 1.3021 | 175.47 | 9300 | 4.2073 | 0.1104 | | 1.2694 | 176.42 | 9350 | 4.2009 | 0.1106 | | 1.3015 | 177.36 | 9400 | 4.2318 | 0.1126 | | 1.2935 | 178.3 | 9450 | 4.2460 | 0.1142 | | 1.2766 | 179.25 | 9500 | 4.2334 | 0.1134 | | 1.1748 | 180.19 | 9550 | 4.2197 | 0.1119 | | 1.2498 | 181.13 | 9600 | 4.2149 | 0.1107 | | 1.2658 | 182.08 | 9650 | 4.2115 | 0.1126 | | 1.3142 | 183.02 | 9700 | 4.2067 | 0.1107 | | 1.2422 | 183.96 | 9750 | 4.2044 | 0.1123 | | 1.2152 | 184.91 | 9800 | 4.2051 | 0.1130 | | 1.2157 | 185.85 | 9850 | 4.2080 | 0.1132 | | 1.1727 | 186.79 | 9900 | 4.2041 | 0.1104 | | 1.2594 | 187.74 | 9950 | 4.2049 | 0.1115 | | 1.3206 | 188.68 | 10000 | 4.2014 | 0.1115 | | 1.1332 | 189.62 | 10050 | 4.2047 | 0.1114 | | 1.2477 | 190.57 | 10100 | 4.2078 | 0.1115 | | 1.2712 | 191.51 | 10150 | 4.2069 | 0.1117 | | 1.1063 | 192.45 | 10200 | 4.2073 | 0.1119 | | 1.3181 | 193.4 | 10250 | 4.2094 | 0.1109 | | 1.1348 | 194.34 | 10300 | 4.2090 | 
0.1114 | | 1.224 | 195.28 | 10350 | 4.2065 | 0.1114 | | 1.242 | 196.23 | 10400 | 4.2089 | 0.1112 | | 1.1683 | 197.17 | 10450 | 4.2100 | 0.1113 | | 1.2693 | 198.11 | 10500 | 4.2081 | 0.1109 | | 1.3093 | 199.06 | 10550 | 4.2092 | 0.1109 | | 1.229 | 200.0 | 10600 | 4.2098 | 0.1111 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1 - Datasets 2.14.5 - Tokenizers 0.14.1
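A minimal transcription sketch (not part of the original card): it assumes the checkpoint behaves like a standard `Wav2Vec2ForCTC` model despite the custom focal-CTC / RMSNorm training setup, and the model path and audio file below are placeholders.

```python
from transformers import pipeline

# Placeholder id: substitute this checkpoint's actual Hub repo id or local directory.
asr = pipeline("automatic-speech-recognition", model="path/to/this-checkpoint")

# Expects 16 kHz mono audio, matching the wav2vec2-base-vietnamese-250h base model.
print(asr("vietnamese_sample.wav")["text"])
```

If the architectural changes implied by the run name (linguistic / rmsnorm) are not registered in stock `transformers`, loading may instead require the original training code.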
hkivancoral/hushem_1x_deit_tiny_sgd_lr001_fold3
hkivancoral
2023-11-10T12:42:14Z
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T12:40:11Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_deit_tiny_sgd_lr001_fold3 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.4186046511627907 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3137 - Accuracy: 0.4186 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5163 | 0.3023 | | 1.6001 | 2.0 | 12 | 1.4936 | 0.3023 | | 1.6001 | 3.0 | 18 | 1.4729 | 0.3023 | | 1.5411 | 4.0 | 24 | 1.4550 | 0.3023 | | 1.4977 | 5.0 | 30 | 1.4401 | 0.3023 | | 1.4977 | 6.0 | 36 | 1.4267 | 0.3023 | | 1.4396 | 7.0 | 42 | 1.4159 | 0.3023 | | 1.4396 | 8.0 | 48 | 1.4066 | 0.3023 | | 1.4314 | 9.0 | 54 | 1.3991 | 0.3023 | | 1.3704 | 10.0 | 60 | 1.3909 | 0.3023 | | 1.3704 | 11.0 | 66 | 1.3847 | 0.3023 | | 1.3552 | 12.0 | 72 | 1.3793 | 0.3023 | | 1.3552 | 13.0 | 78 | 1.3735 | 0.3256 | | 1.3421 | 14.0 | 84 | 1.3686 | 0.3488 | | 1.3202 | 15.0 | 90 | 1.3638 | 0.3488 | | 1.3202 | 16.0 | 96 | 1.3593 | 0.3721 | | 1.2948 | 17.0 | 102 | 1.3558 | 0.3953 | | 1.2948 | 18.0 | 108 | 1.3518 | 0.3953 | | 1.2928 | 19.0 | 114 | 1.3488 | 0.3953 | | 1.2647 | 20.0 | 120 | 1.3454 | 0.3953 | | 1.2647 | 21.0 | 126 | 1.3427 | 0.3953 | | 1.2556 | 22.0 | 132 | 1.3402 | 0.3953 | | 1.2556 | 23.0 | 138 | 1.3379 | 0.3953 | | 1.253 | 24.0 | 144 | 1.3353 | 0.3953 | | 1.2437 | 25.0 | 150 | 1.3327 | 0.3953 | | 1.2437 | 26.0 | 156 | 1.3306 | 0.4186 | | 1.2239 | 27.0 | 162 | 1.3289 | 0.3953 | | 1.2239 | 28.0 | 168 | 1.3270 | 0.3953 | | 1.2275 | 29.0 | 174 | 1.3251 | 0.3953 | | 1.2028 | 30.0 | 180 | 1.3234 | 0.3953 | | 1.2028 | 31.0 | 186 | 1.3221 | 0.3953 | | 1.202 | 32.0 | 192 | 1.3205 | 0.3953 | | 1.202 | 33.0 | 198 | 1.3191 | 0.3953 | | 1.194 | 34.0 | 204 | 1.3178 | 0.3953 | | 1.1993 | 35.0 | 210 | 1.3169 | 0.4186 | | 1.1993 | 36.0 | 216 | 1.3160 | 0.4186 | | 1.1904 | 37.0 | 222 | 1.3153 | 0.4186 | | 1.1904 | 38.0 | 228 | 1.3147 | 0.4186 | | 1.1785 | 39.0 | 234 | 1.3142 | 0.4186 | | 1.2086 | 40.0 | 240 | 1.3139 | 0.4186 | | 1.2086 | 41.0 | 246 | 1.3138 | 0.4186 | | 1.1893 | 42.0 | 252 | 1.3137 | 0.4186 | | 1.1893 | 43.0 | 258 | 1.3137 | 0.4186 | | 1.2 | 44.0 | 264 | 1.3137 | 0.4186 | | 1.1775 | 45.0 | 270 | 1.3137 | 0.4186 | | 1.1775 | 46.0 | 276 | 1.3137 | 0.4186 | | 1.1852 | 47.0 | 282 | 1.3137 | 0.4186 | | 1.1852 | 48.0 | 288 | 1.3137 | 0.4186 | | 1.1783 | 49.0 | 294 | 1.3137 | 0.4186 | | 
1.1702 | 50.0 | 300 | 1.3137 | 0.4186 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
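A minimal inference sketch (assumption: the checkpoint loads with the standard image-classification pipeline; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_deit_tiny_sgd_lr001_fold3",
)
# Placeholder image path; replace with an image from the evaluation domain.
print(classifier("example_image.png"))
```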
gyro5/dialo-bot
gyro5
2023-11-10T12:27:47Z
12
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-04-11T04:19:32Z
--- tags: - conversational --- # Discord chatbot based on microsoft/DialoGPT-medium This model is used for a Discord chatbot. The model and the chatbot are for testing only.
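A single-turn sketch following the usual DialoGPT pattern (the user message is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gyro5/dialo-bot")
model = AutoModelForCausalLM.from_pretrained("gyro5/dialo-bot")

# Encode the user message and append the EOS token, as in the standard DialoGPT examples.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (the bot's reply).
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```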
antonjaragon/emotions_6_classes_small
antonjaragon
2023-11-10T12:19:00Z
19
3
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-08-17T18:53:48Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: emotions_6_classes_small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotions_6_classes_small This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the 'Audio emotions' public dataset, available form https://www.kaggle.com/datasets/uldisvalainis/audio-emotions. 'Surprised' class was discarded due to lack of samples. It achieves the following results on the evaluation set: - Loss: 0.9106 - Accuracy: 0.7920 ## Model description Classifies audios into 6 emotions: - Angry - Happy - Sad - Neutral - Fearful - Disgusted ## Intended uses & limitations This model was trained for educational purposes. ## Training and evaluation data - Training: 80% - Test: 20% ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2009 | 0.99 | 19 | 0.6892 | 0.7891 | | 0.2272 | 1.97 | 38 | 0.7235 | 0.7817 | | 0.2196 | 2.96 | 57 | 0.7027 | 0.7809 | | 0.2402 | 4.0 | 77 | 0.7953 | 0.7592 | | 0.2301 | 4.99 | 96 | 0.7979 | 0.7699 | | 0.1896 | 5.97 | 115 | 0.7533 | 0.7838 | | 0.188 | 6.96 | 134 | 0.7483 | 0.7817 | | 0.1573 | 8.0 | 154 | 0.8200 | 0.7756 | | 0.1576 | 8.99 | 173 | 0.7623 | 0.7944 | | 0.1452 | 9.97 | 192 | 0.7460 | 0.7944 | | 0.1322 | 10.96 | 211 | 0.8031 | 0.7875 | | 0.1353 | 12.0 | 231 | 0.7864 | 0.7883 | | 0.1211 | 12.99 | 250 | 0.7934 | 0.7903 | | 0.1165 | 13.97 | 269 | 0.7734 | 0.7936 | | 0.0928 | 14.96 | 288 | 0.8743 | 0.7842 | | 0.095 | 16.0 | 308 | 0.8483 | 0.7867 | | 0.0824 | 16.99 | 327 | 0.8860 | 0.7850 | | 0.0896 | 17.97 | 346 | 0.8314 | 0.7957 | | 0.0874 | 18.96 | 365 | 0.8164 | 0.7936 | | 0.081 | 20.0 | 385 | 0.8250 | 0.7993 | | 0.0673 | 20.99 | 404 | 0.9118 | 0.7879 | | 0.0716 | 21.97 | 423 | 0.8605 | 0.7912 | | 0.0588 | 22.96 | 442 | 0.8470 | 0.7985 | | 0.0579 | 24.0 | 462 | 0.8906 | 0.7920 | | 0.0511 | 24.99 | 481 | 0.8853 | 0.7969 | | 0.0488 | 25.97 | 500 | 0.8901 | 0.7973 | | 0.0468 | 26.96 | 519 | 0.9083 | 0.7895 | | 0.0505 | 28.0 | 539 | 0.9010 | 0.7903 | | 0.0542 | 28.99 | 558 | 0.8924 | 0.7944 | | 0.0542 | 29.61 | 570 | 0.9106 | 0.7920 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
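A minimal inference sketch (the audio path is a placeholder; clips should be speech, ideally 16 kHz as expected by wav2vec2-base):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="antonjaragon/emotions_6_classes_small")
# Placeholder path; returns scores for the six emotion classes listed above.
print(classifier("speech_sample.wav"))
```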
camilabrion/traductor
camilabrion
2023-11-10T12:17:51Z
0
0
null
[ "es", "license:mit", "region:us" ]
null
2023-11-10T12:02:21Z
--- license: mit language: - es --- Translator from Colombian Spanish into other dialects: it converts Colombian Spanish into the Spanish of Spain, Argentina, Mexico, and Puerto Rico, so you can communicate effectively across regions. This repository hosts a translator built with OpenAI's GPT-3.5 model, designed to handle several variants of Spanish. Here you will find the files and code needed to deploy and experiment with the translator. Prompt engineering was applied to optimize translation quality. You can load this model directly on Hugging Face through the web interface or with Git. See the detailed instructions in this README to start using the translator and to contribute to its development. Enjoy translating between Spanish variants efficiently and creatively!
hkivancoral/hushem_1x_deit_tiny_adamax_lr00001_fold1
hkivancoral
2023-11-10T12:14:33Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T12:12:08Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_deit_tiny_adamax_lr00001_fold1 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.4222222222222222 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr00001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3005 - Accuracy: 0.4222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.4838 | 0.2222 | | No log | 2.0 | 3 | 1.4436 | 0.2222 | | No log | 2.67 | 4 | 1.4334 | 0.1778 | | No log | 4.0 | 6 | 1.4190 | 0.2667 | | No log | 4.67 | 7 | 1.4121 | 0.2889 | | No log | 6.0 | 9 | 1.3991 | 0.3111 | | 1.3869 | 6.67 | 10 | 1.3926 | 0.3333 | | 1.3869 | 8.0 | 12 | 1.3807 | 0.3556 | | 1.3869 | 8.67 | 13 | 1.3748 | 0.3556 | | 1.3869 | 10.0 | 15 | 1.3643 | 0.3778 | | 1.3869 | 10.67 | 16 | 1.3598 | 0.3778 | | 1.3869 | 12.0 | 18 | 1.3511 | 0.4 | | 1.3869 | 12.67 | 19 | 1.3478 | 0.3778 | | 1.1228 | 14.0 | 21 | 1.3405 | 0.4 | | 1.1228 | 14.67 | 22 | 1.3380 | 0.4 | | 1.1228 | 16.0 | 24 | 1.3323 | 0.4222 | | 1.1228 | 16.67 | 25 | 1.3292 | 0.4222 | | 1.1228 | 18.0 | 27 | 1.3250 | 0.4222 | | 1.1228 | 18.67 | 28 | 1.3231 | 0.4222 | | 0.9505 | 20.0 | 30 | 1.3201 | 0.4222 | | 0.9505 | 20.67 | 31 | 1.3189 | 0.4222 | | 0.9505 | 22.0 | 33 | 1.3162 | 0.4222 | | 0.9505 | 22.67 | 34 | 1.3147 | 0.4222 | | 0.9505 | 24.0 | 36 | 1.3120 | 0.4222 | | 0.9505 | 24.67 | 37 | 1.3113 | 0.4222 | | 0.9505 | 26.0 | 39 | 1.3090 | 0.4222 | | 0.8411 | 26.67 | 40 | 1.3078 | 0.4222 | | 0.8411 | 28.0 | 42 | 1.3057 | 0.4222 | | 0.8411 | 28.67 | 43 | 1.3047 | 0.4222 | | 0.8411 | 30.0 | 45 | 1.3028 | 0.4222 | | 0.8411 | 30.67 | 46 | 1.3020 | 0.4222 | | 0.8411 | 32.0 | 48 | 1.3010 | 0.4222 | | 0.8411 | 32.67 | 49 | 1.3007 | 0.4222 | | 0.7881 | 33.33 | 50 | 1.3005 | 0.4222 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
kyujinpy/Korean-OpenOrca-v3
kyujinpy
2023-11-10T12:14:27Z
2,252
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "dataset:kyujinpy/OpenOrca-ko-v3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-04T07:06:37Z
--- language: - ko datasets: - kyujinpy/OpenOrca-ko-v3 library_name: transformers pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- **This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. ((주)미디어그룹사람과숲) and Marker Inc. ((주)마커).** **The license is `cc-by-nc-sa-4.0`.** # **🐳Korean-OpenOrca-13B-v3🐳** ![img](./Korean-OpenOrca.png) ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Model Architecture** Korean-OpenOrca-13B is an auto-regressive language model based on the LLaMA2 transformer architecture. **Repo Link** Github Korean-OpenOrca: [🐳Korean-OpenOrca🐳](https://github.com/Marker-Inc-Korea/Korean-OpenOrca) **Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) **Training Dataset** I used [OpenOrca-ko-v3](https://huggingface.co/datasets/kyujinpy/OpenOrca-ko-v3), a translation of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) produced with DeepL. Training was done on an A100 40GB GPU on Colab. # Model comparisons | Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | --- | --- | --- | --- | --- | --- | --- | | [Korean-OpenOrca-13B🐳] | 48.79 | 43.09 | 54.13 | 40.24 | 45.22 | 61.28 | | [Korean-OpenOrca-13B-v2🐳] | 48.17 | 43.17 | 54.51 | 42.90 | 41.82 | 58.44 | | Korean-OpenOrca-13B-v3🐳 | 48.86 | 43.77 | 54.30 | 41.79 | 43.85 | 60.57 | # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/Korean-OpenOrca-13B-v3" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` ---
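As a follow-up to the loading snippet above, a hedged generation sketch — the prompt and decoding settings are illustrative, not a prescribed template:

```python
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?" (illustrative prompt)
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
output = OpenOrca.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```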
KyriaAnnwyn/lora-trained-RobertJones_RV2_novae_long-xl
KyriaAnnwyn
2023-11-10T12:10:53Z
2
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:SG161222/RealVisXL_V2.0", "base_model:adapter:SG161222/RealVisXL_V2.0", "license:openrail++", "region:us" ]
text-to-image
2023-11-10T11:44:35Z
--- license: openrail++ base_model: SG161222/RealVisXL_V2.0 instance_prompt: a photo of RobertJones tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - KyriaAnnwyn/lora-trained-RobertJones_RV2_novae_long-xl These are LoRA adaption weights for SG161222/RealVisXL_V2.0. The weights were trained on a photo of RobertJones using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: True. Special VAE used for training: None.
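A loading sketch using the usual diffusers LoRA workflow for SDXL (output file name and inference settings are illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "SG161222/RealVisXL_V2.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("KyriaAnnwyn/lora-trained-RobertJones_RV2_novae_long-xl")

image = pipe("a photo of RobertJones", num_inference_steps=30).images[0]
image.save("robert_jones.png")  # illustrative output file name
```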
hkivancoral/hushem_1x_deit_tiny_adamax_lr0001_fold5
hkivancoral
2023-11-10T12:07:48Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T12:06:18Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_deit_tiny_adamax_lr0001_fold5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.6585365853658537 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr0001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8924 - Accuracy: 0.6585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.9330 | 0.2439 | | No log | 2.0 | 3 | 1.4362 | 0.3659 | | No log | 2.67 | 4 | 1.3806 | 0.3902 | | No log | 4.0 | 6 | 1.3304 | 0.4634 | | No log | 4.67 | 7 | 1.3017 | 0.4390 | | No log | 6.0 | 9 | 1.1836 | 0.4878 | | 1.2323 | 6.67 | 10 | 1.1688 | 0.5610 | | 1.2323 | 8.0 | 12 | 1.1361 | 0.5366 | | 1.2323 | 8.67 | 13 | 1.1291 | 0.5366 | | 1.2323 | 10.0 | 15 | 1.0782 | 0.6098 | | 1.2323 | 10.67 | 16 | 1.0358 | 0.6585 | | 1.2323 | 12.0 | 18 | 1.0020 | 0.6098 | | 1.2323 | 12.67 | 19 | 1.0059 | 0.6098 | | 0.3527 | 14.0 | 21 | 0.9293 | 0.6098 | | 0.3527 | 14.67 | 22 | 0.9162 | 0.6341 | | 0.3527 | 16.0 | 24 | 0.9233 | 0.6098 | | 0.3527 | 16.67 | 25 | 0.9213 | 0.6098 | | 0.3527 | 18.0 | 27 | 0.9193 | 0.6098 | | 0.3527 | 18.67 | 28 | 0.9345 | 0.6098 | | 0.04 | 20.0 | 30 | 0.8872 | 0.6585 | | 0.04 | 20.67 | 31 | 0.8549 | 0.6829 | | 0.04 | 22.0 | 33 | 0.8221 | 0.6829 | | 0.04 | 22.67 | 34 | 0.8117 | 0.7073 | | 0.04 | 24.0 | 36 | 0.8041 | 0.7561 | | 0.04 | 24.67 | 37 | 0.8128 | 0.7561 | | 0.04 | 26.0 | 39 | 0.8413 | 0.6829 | | 0.0062 | 26.67 | 40 | 0.8565 | 0.6585 | | 0.0062 | 28.0 | 42 | 0.8789 | 0.6585 | | 0.0062 | 28.67 | 43 | 0.8864 | 0.6585 | | 0.0062 | 30.0 | 45 | 0.8920 | 0.6585 | | 0.0062 | 30.67 | 46 | 0.8925 | 0.6585 | | 0.0062 | 32.0 | 48 | 0.8929 | 0.6585 | | 0.0062 | 32.67 | 49 | 0.8927 | 0.6585 | | 0.0031 | 33.33 | 50 | 0.8924 | 0.6585 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
hkivancoral/hushem_1x_deit_tiny_adamax_lr0001_fold3
hkivancoral
2023-11-10T12:04:37Z
16
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T12:03:06Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_deit_tiny_adamax_lr0001_fold3 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.6976744186046512 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr0001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8641 - Accuracy: 0.6977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.6041 | 0.2558 | | No log | 2.0 | 3 | 1.2890 | 0.3953 | | No log | 2.67 | 4 | 1.2944 | 0.3023 | | No log | 4.0 | 6 | 1.2013 | 0.4186 | | No log | 4.67 | 7 | 1.1135 | 0.4186 | | No log | 6.0 | 9 | 1.0796 | 0.5349 | | 1.2559 | 6.67 | 10 | 1.0570 | 0.5581 | | 1.2559 | 8.0 | 12 | 1.1038 | 0.4884 | | 1.2559 | 8.67 | 13 | 1.0764 | 0.4884 | | 1.2559 | 10.0 | 15 | 0.9749 | 0.5349 | | 1.2559 | 10.67 | 16 | 0.9354 | 0.5581 | | 1.2559 | 12.0 | 18 | 0.9274 | 0.6279 | | 1.2559 | 12.67 | 19 | 0.9435 | 0.6512 | | 0.4315 | 14.0 | 21 | 0.9225 | 0.6512 | | 0.4315 | 14.67 | 22 | 0.9168 | 0.6279 | | 0.4315 | 16.0 | 24 | 0.8830 | 0.6279 | | 0.4315 | 16.67 | 25 | 0.8956 | 0.6512 | | 0.4315 | 18.0 | 27 | 0.9038 | 0.6744 | | 0.4315 | 18.67 | 28 | 0.8913 | 0.6744 | | 0.058 | 20.0 | 30 | 0.8683 | 0.6512 | | 0.058 | 20.67 | 31 | 0.8553 | 0.6744 | | 0.058 | 22.0 | 33 | 0.8508 | 0.6977 | | 0.058 | 22.67 | 34 | 0.8546 | 0.6977 | | 0.058 | 24.0 | 36 | 0.8627 | 0.6977 | | 0.058 | 24.67 | 37 | 0.8639 | 0.6977 | | 0.058 | 26.0 | 39 | 0.8636 | 0.7209 | | 0.0086 | 26.67 | 40 | 0.8627 | 0.7209 | | 0.0086 | 28.0 | 42 | 0.8622 | 0.7209 | | 0.0086 | 28.67 | 43 | 0.8622 | 0.6977 | | 0.0086 | 30.0 | 45 | 0.8629 | 0.6977 | | 0.0086 | 30.67 | 46 | 0.8632 | 0.6977 | | 0.0086 | 32.0 | 48 | 0.8638 | 0.6977 | | 0.0086 | 32.67 | 49 | 0.8640 | 0.6977 | | 0.004 | 33.33 | 50 | 0.8641 | 0.6977 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
jonjimenez/transaction-categorization
jonjimenez
2023-11-10T12:02:46Z
127
1
transformers
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-10T11:25:03Z
This model was built to categorize financial transactions, mainly from the information about the destination entity (payee/merchant) as it appears on bank statements. It classifies transactions into 12 categories: - Bank Transfers: Label_0 - Entertainment: Label_1 - Food & Drink: Label_2 - General Merchandise: Label_3 - General Services: Label_4 - Government + Non-Profit: Label_5 - Income: Label_6 - Loans: Label_7 - Medical: Label_8 - Rent & Utilities: Label_9 - Transportation: Label_10 - Travel: Label_11
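A minimal inference sketch (the example bank-statement string is hypothetical, and the `LABEL_*` spelling assumes the default id2label mapping of the exported model):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jonjimenez/transaction-categorization")

label_names = {
    "LABEL_0": "Bank Transfers", "LABEL_1": "Entertainment", "LABEL_2": "Food & Drink",
    "LABEL_3": "General Merchandise", "LABEL_4": "General Services", "LABEL_5": "Government + Non-Profit",
    "LABEL_6": "Income", "LABEL_7": "Loans", "LABEL_8": "Medical", "LABEL_9": "Rent & Utilities",
    "LABEL_10": "Transportation", "LABEL_11": "Travel",
}

pred = classifier("PAGO TARJETA SUPERMERCADO XYZ")[0]  # hypothetical bank-statement entry
print(label_names.get(pred["label"], pred["label"]), round(pred["score"], 3))
```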
dreMaz/AnimeMangaInpainting
dreMaz
2023-11-10T11:55:04Z
0
7
null
[ "license:mit", "region:us" ]
null
2023-11-10T10:16:10Z
--- license: mit --- lama_large_512px.ckpt is [big lama](https://github.com/advimman/lama) fine-tuned on 300k manga- and anime-style images. It performs significantly better than the older lama_mpe checkpoint on manga.
hkivancoral/hushem_1x_deit_tiny_adamax_lr001_fold5
hkivancoral
2023-11-10T11:54:42Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T11:52:34Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_deit_tiny_adamax_lr001_fold5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.6585365853658537 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1657 - Accuracy: 0.6585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 4.7722 | 0.2439 | | No log | 2.0 | 3 | 1.4567 | 0.2439 | | No log | 2.67 | 4 | 1.8233 | 0.2683 | | No log | 4.0 | 6 | 1.3918 | 0.2439 | | No log | 4.67 | 7 | 1.4247 | 0.2195 | | No log | 6.0 | 9 | 1.3988 | 0.2439 | | 1.9646 | 6.67 | 10 | 1.3700 | 0.3415 | | 1.9646 | 8.0 | 12 | 1.3164 | 0.3902 | | 1.9646 | 8.67 | 13 | 1.2953 | 0.3902 | | 1.9646 | 10.0 | 15 | 1.0825 | 0.5366 | | 1.9646 | 10.67 | 16 | 0.9280 | 0.7561 | | 1.9646 | 12.0 | 18 | 0.9474 | 0.5610 | | 1.9646 | 12.67 | 19 | 0.9791 | 0.5122 | | 1.1934 | 14.0 | 21 | 1.3039 | 0.3902 | | 1.1934 | 14.67 | 22 | 1.3242 | 0.3902 | | 1.1934 | 16.0 | 24 | 0.8880 | 0.6341 | | 1.1934 | 16.67 | 25 | 0.8367 | 0.6341 | | 1.1934 | 18.0 | 27 | 0.8476 | 0.6098 | | 1.1934 | 18.67 | 28 | 0.9406 | 0.5854 | | 0.8297 | 20.0 | 30 | 1.1819 | 0.4878 | | 0.8297 | 20.67 | 31 | 0.9194 | 0.5610 | | 0.8297 | 22.0 | 33 | 0.7486 | 0.6829 | | 0.8297 | 22.67 | 34 | 1.1493 | 0.6341 | | 0.8297 | 24.0 | 36 | 1.2217 | 0.5854 | | 0.8297 | 24.67 | 37 | 0.7746 | 0.6829 | | 0.8297 | 26.0 | 39 | 0.8320 | 0.6585 | | 0.5433 | 26.67 | 40 | 1.2210 | 0.5610 | | 0.5433 | 28.0 | 42 | 1.3782 | 0.5366 | | 0.5433 | 28.67 | 43 | 1.1529 | 0.6098 | | 0.5433 | 30.0 | 45 | 1.0361 | 0.6585 | | 0.5433 | 30.67 | 46 | 1.1089 | 0.6585 | | 0.5433 | 32.0 | 48 | 1.1802 | 0.6098 | | 0.5433 | 32.67 | 49 | 1.1774 | 0.6585 | | 0.2758 | 33.33 | 50 | 1.1657 | 0.6585 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
jake-walker/ppo-CartPole-v1
jake-walker
2023-11-10T11:54:07Z
0
0
null
[ "tensorboard", "CartPole-v1", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T11:50:08Z
--- tags: - CartPole-v1 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 74.70 +/- 41.05 name: mean_reward verified: false --- # PPO Agent Playing CartPole-v1 This is a trained model of a PPO agent playing CartPole-v1. # Hyperparameters ```python {'exp_name': 'ppo' 'gym_id': 'CartPole-v1' 'learning_rate': 0.00025 'seed': 1 'total_timesteps': 25000 'torch_deterministic': True 'cuda': True 'track': False 'capture_video': False 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_gradient_norm': 0.5 'target_kl': None 'repo_id': 'jake-walker/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
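A hedged evaluation sketch — the checkpoint file name and the CleanRL-style `Agent.get_action_and_value` interface are assumptions, not taken from this card:

```python
import gymnasium as gym
import torch

env = gym.make("CartPole-v1")
agent = torch.load("ppo-CartPole-v1.pt")  # hypothetical: however the trained Agent was saved

obs, _ = env.reset(seed=1)
done, total_reward = False, 0.0
while not done:
    with torch.no_grad():
        # Assumes a CleanRL-style Agent exposing get_action_and_value(obs_batch).
        action, *_ = agent.get_action_and_value(torch.tensor(obs).unsqueeze(0))
    obs, reward, terminated, truncated, _ = env.step(action.item())
    total_reward += reward
    done = terminated or truncated

print("episode reward:", total_reward)
```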
hkivancoral/hushem_1x_deit_tiny_adamax_lr001_fold2
hkivancoral
2023-11-10T11:47:52Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T11:45:33Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_deit_tiny_adamax_lr001_fold2 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.5333333333333333 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4814 - Accuracy: 0.5333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 4.5182 | 0.2444 | | No log | 2.0 | 3 | 1.5416 | 0.2444 | | No log | 2.67 | 4 | 1.5662 | 0.2667 | | No log | 4.0 | 6 | 1.4453 | 0.2444 | | No log | 4.67 | 7 | 1.4082 | 0.2444 | | No log | 6.0 | 9 | 1.3188 | 0.4222 | | 1.9051 | 6.67 | 10 | 1.3266 | 0.3556 | | 1.9051 | 8.0 | 12 | 1.2375 | 0.4667 | | 1.9051 | 8.67 | 13 | 1.3632 | 0.3778 | | 1.9051 | 10.0 | 15 | 1.2064 | 0.4 | | 1.9051 | 10.67 | 16 | 1.5392 | 0.2889 | | 1.9051 | 12.0 | 18 | 1.1260 | 0.4889 | | 1.9051 | 12.67 | 19 | 1.0999 | 0.4667 | | 1.1808 | 14.0 | 21 | 1.2445 | 0.4222 | | 1.1808 | 14.67 | 22 | 1.2069 | 0.4444 | | 1.1808 | 16.0 | 24 | 1.0381 | 0.4889 | | 1.1808 | 16.67 | 25 | 1.0992 | 0.5111 | | 1.1808 | 18.0 | 27 | 1.1085 | 0.5333 | | 1.1808 | 18.67 | 28 | 1.0609 | 0.5111 | | 0.899 | 20.0 | 30 | 1.1754 | 0.5333 | | 0.899 | 20.67 | 31 | 1.1214 | 0.5333 | | 0.899 | 22.0 | 33 | 1.2625 | 0.4889 | | 0.899 | 22.67 | 34 | 1.2586 | 0.5111 | | 0.899 | 24.0 | 36 | 1.3423 | 0.4667 | | 0.899 | 24.67 | 37 | 1.4290 | 0.4667 | | 0.899 | 26.0 | 39 | 1.3722 | 0.5333 | | 0.4924 | 26.67 | 40 | 1.4024 | 0.5111 | | 0.4924 | 28.0 | 42 | 1.3396 | 0.5111 | | 0.4924 | 28.67 | 43 | 1.4100 | 0.4444 | | 0.4924 | 30.0 | 45 | 1.5561 | 0.4889 | | 0.4924 | 30.67 | 46 | 1.5223 | 0.5556 | | 0.4924 | 32.0 | 48 | 1.4581 | 0.5778 | | 0.4924 | 32.67 | 49 | 1.4627 | 0.5556 | | 0.1685 | 33.33 | 50 | 1.4814 | 0.5333 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
Kwangchompoo/DD
Kwangchompoo
2023-11-10T11:47:25Z
0
0
null
[ "region:us" ]
null
2023-11-10T11:40:55Z
(female model) + (wearing a dress) + (light, natural makeup) + (standing holding the product) + (smiling) + (cream) + (lighting) + (department store)
andrea-coppari/phi-1_5-geodata-finetuning-ita
andrea-coppari
2023-11-10T11:45:24Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:other", "region:us" ]
null
2023-11-10T11:25:53Z
--- license: other base_model: microsoft/phi-1_5 tags: - generated_from_trainer model-index: - name: phi-1_5-geodata-finetuning-ita results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-geodata-finetuning-ita This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1500 ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
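A hedged usage sketch — it assumes the repo holds a full fine-tuned checkpoint loadable with `AutoModelForCausalLM` (phi-1_5 typically needs `trust_remote_code=True` with the transformers version listed above); if only adapter weights were pushed, loading would go through `peft` instead. The Italian prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andrea-coppari/phi-1_5-geodata-finetuning-ita"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# "Describe the geological data of Trentino:" — illustrative prompt.
inputs = tokenizer("Descrivi i dati geologici del Trentino:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```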
hkivancoral/hushem_1x_deit_tiny_adamax_lr001_fold1
hkivancoral
2023-11-10T11:45:14Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-tiny-patch16-224", "base_model:finetune:facebook/deit-tiny-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T11:42:33Z
--- license: apache-2.0 base_model: facebook/deit-tiny-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_1x_deit_tiny_adamax_lr001_fold1 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.4444444444444444 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5661 - Accuracy: 0.4444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 2.2715 | 0.2667 | | No log | 2.0 | 3 | 2.0269 | 0.4 | | No log | 2.67 | 4 | 1.6111 | 0.2889 | | No log | 4.0 | 6 | 1.4755 | 0.2444 | | No log | 4.67 | 7 | 1.3818 | 0.4667 | | No log | 6.0 | 9 | 1.3523 | 0.3111 | | 1.6844 | 6.67 | 10 | 1.4010 | 0.2444 | | 1.6844 | 8.0 | 12 | 1.2634 | 0.4444 | | 1.6844 | 8.67 | 13 | 1.3983 | 0.4222 | | 1.6844 | 10.0 | 15 | 1.7897 | 0.3778 | | 1.6844 | 10.67 | 16 | 1.7305 | 0.3111 | | 1.6844 | 12.0 | 18 | 1.3560 | 0.4667 | | 1.6844 | 12.67 | 19 | 1.8545 | 0.4222 | | 1.001 | 14.0 | 21 | 2.1000 | 0.3778 | | 1.001 | 14.67 | 22 | 1.2257 | 0.4889 | | 1.001 | 16.0 | 24 | 1.2741 | 0.4444 | | 1.001 | 16.67 | 25 | 1.9098 | 0.3556 | | 1.001 | 18.0 | 27 | 1.4981 | 0.3778 | | 1.001 | 18.67 | 28 | 1.0949 | 0.4222 | | 0.7366 | 20.0 | 30 | 1.1640 | 0.4222 | | 0.7366 | 20.67 | 31 | 1.5156 | 0.3556 | | 0.7366 | 22.0 | 33 | 1.8559 | 0.3556 | | 0.7366 | 22.67 | 34 | 1.5735 | 0.4444 | | 0.7366 | 24.0 | 36 | 1.3202 | 0.4222 | | 0.7366 | 24.67 | 37 | 1.3837 | 0.4222 | | 0.7366 | 26.0 | 39 | 1.6707 | 0.4 | | 0.4908 | 26.67 | 40 | 1.8712 | 0.3778 | | 0.4908 | 28.0 | 42 | 2.1885 | 0.3556 | | 0.4908 | 28.67 | 43 | 2.0505 | 0.3556 | | 0.4908 | 30.0 | 45 | 1.6855 | 0.4 | | 0.4908 | 30.67 | 46 | 1.5304 | 0.4222 | | 0.4908 | 32.0 | 48 | 1.5067 | 0.3778 | | 0.4908 | 32.67 | 49 | 1.5442 | 0.4222 | | 0.3287 | 33.33 | 50 | 1.5661 | 0.4444 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
noamwies/llama-test-gqa-with-better-transformer
noamwies
2023-11-10T11:43:33Z
20
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T10:48:03Z
--- license: apache-2.0 --- A miniature llama model for testing the llama GQA variant in the BetterTransformer framework.
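A small sketch of the kind of test this model targets (assumes the repo ships a tokenizer and that `optimum` is installed for the BetterTransformer conversion; the prompt is arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "noamwies/llama-test-gqa-with-better-transformer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Convert to the BetterTransformer fastpath (requires the `optimum` package).
model = model.to_bettertransformer()

inputs = tokenizer("Hello", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```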
RJuro/kanelsnegl-v0.1
RJuro
2023-11-10T11:22:33Z
23
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "da", "en", "dataset:DDSC/partial-danish-gigaword-no-twitter", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-10T09:27:53Z
--- license: mit datasets: - DDSC/partial-danish-gigaword-no-twitter language: - da - en --- # Model Card for kanelsnegl-v0.1 !!! This model is built for fun and learning and needs much more finetuning !!! <img src="https://huggingface.co/RJuro/kanelsnegl-v0.1/resolve/main/kanelsnegl_banner.png" alt="Kanelsnegl Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> ## Model Description Base model: [Zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) finetuned on [DDSC/partial-danish-gigaword-no-twitter](https://huggingface.co/datasets/DDSC/partial-danish-gigaword-no-twitter). The training involved a maximum length of 512. QLora completion finetuning of all linear layers was also implemented. This model is mostly fun tinkering for personal learning purpose. A heavily instruction-tuned model was used in the hope of transferring some of its behaviors into the Danish finetune. The model often produces somewhat fun halucinations and instruction following is not really happening that well, but there may be ways to fine-tune it in a supervised manner for more controlled behavior. ## Usage ```python from transformers import AutoTokenizer, TextStreamer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("RJuro/kanelsnegl-v0.1", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("RJuro/kanelsnegl-v0.1") streamer = TextStreamer(tokenizer, skip_special_tokens=True) tokens = tokenizer( """ Hvordan kan vi udvikle gode open source sprogmodeller i Danmark? """, return_tensors='pt' ).input_ids.cuda() generation_output = model.generate( tokens, streamer=streamer, max_length=512, repetition_penalty=1.2, # A value > 1.0 discourages repetition no_repeat_ngram_size=3, # Prevents repeating n-grams (you can adjust the size) temperature=0.8, # Can adjust to make generation less random top_k=50, # Only the top 50 tokens are considered for each prediction top_p=0.95, # Uses nucleus sampling with a cumulative probability of 95% early_stopping=True # Stops generation at the end of a sentence ) ``` Returns ``` Hvordan kan vi udvikle gode open source sprogmodeller i Danmark? <|assistant|> Jeg er ikke en menneskelig, der har personlige synspunkter eller måder at tænke på. men jeg vil gerne hjælpe dig med svar på din spørgsmål: for at udvike gode opensource sprogmodeler i danmark, kan du følgende trin: 1. sammenkoppel den danske forskningsinstitutioner og universiteter til udveksling af ideer og ressourcer. 2. styrke det danske it-industri ved at oprette et netværk mellem virksomheder og akademiske institutioner. 3. skabe en platform for at deltage i internationale projekter og konferencer. 4. anvendte kunstificielle intelligens (ai) til at analyse sproget, sådan som man bruger ai til andre områder. 5. lave workshops og seminarer for at undervise folk om sprogteknikker og metoder. 6. udbygge de eksisterende databaser med danske ord og grammatisk regler. 7. arbejde sammen med nogle store teknologifirmaer, som har interesse i at investere i danske sprogprojektter. 8. udnyttet sociale medier og online kommunikation til at få mere indsigt i danskernes sprogbrud. 9. udvide den dansk sprogbrug til andres lande, således at man kan lave sprogmodels til flere sprog. 10. uddannelse af talentfulde studerende i sprogteknologi, så de kan bidrage til danske projekters succes. hvis disse trin implementeres korrekt, vil det muliggøre en hurtigere udviking af danske open source spragmodeller. ```
TheBloke/claude2-alpaca-7B-GPTQ
TheBloke
2023-11-10T11:06:50Z
11
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:umd-zhou-lab/claude2_alpaca", "base_model:umd-zhou-lab/claude2-alpaca-7B", "base_model:quantized:umd-zhou-lab/claude2-alpaca-7B", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-11-10T10:42:03Z
--- base_model: umd-zhou-lab/claude2-alpaca-7B datasets: - umd-zhou-lab/claude2_alpaca inference: false language: - en license: llama2 model_creator: Tianyi Lab @ UMD model_name: Claude2 Alpaca 7B model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Claude2 Alpaca 7B - GPTQ - Model creator: [Tianyi Lab @ UMD](https://huggingface.co/umd-zhou-lab) - Original model: [Claude2 Alpaca 7B](https://huggingface.co/umd-zhou-lab/claude2-alpaca-7B) <!-- description start --> ## Description This repo contains GPTQ model files for [Tianyi Lab @ UMD's Claude2 Alpaca 7B](https://huggingface.co/umd-zhou-lab/claude2-alpaca-7B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/claude2-alpaca-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/claude2-alpaca-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF) * [Tianyi Lab @ UMD's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/umd-zhou-lab/claude2-alpaca-7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. 
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/claude2-alpaca-7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/claude2-alpaca-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/claude2-alpaca-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. 
| | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/claude2-alpaca-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/claude2-alpaca-7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/claude2-alpaca-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/claude2-alpaca-7B-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/claude2-alpaca-7B-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `claude2-alpaca-7B-GPTQ`: ```shell mkdir claude2-alpaca-7B-GPTQ huggingface-cli download TheBloke/claude2-alpaca-7B-GPTQ --local-dir claude2-alpaca-7B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir claude2-alpaca-7B-GPTQ huggingface-cli download TheBloke/claude2-alpaca-7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir claude2-alpaca-7B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). 
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir claude2-alpaca-7B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/claude2-alpaca-7B-GPTQ --local-dir claude2-alpaca-7B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/claude2-alpaca-7B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/claude2-alpaca-7B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/claude2-alpaca-7B-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `claude2-alpaca-7B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/claude2-alpaca-7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/claude2-alpaca-7B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. 
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Tianyi Lab @ UMD's Claude2 Alpaca 7B # Model Card for umd-zhou-lab/claude2-alpaca-7B <!-- Provide a quick summary of what the model is/does. --> This model is trained by fine-tuning llama-2 with claude2 alpaca data. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** UMD Tianyi Zhou Lab - **Model type:** An auto-regressive language model based on the transformer architecture - **License:** Llama 2 Community License Agreement - **Finetuned from model:** [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) ### Model Sources <!-- Provide the basic links for the model. --> - **GitHub:** [Claude2-Alpaca](https://github.com/Lichang-Chen/claude2-alpaca) - **Data:** [claude2_alpaca](https://huggingface.co/datasets/umd-zhou-lab/claude2_alpaca) ## Uses The primary use of this model is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. 
## Training We use the prompt from [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay | | --- | ---: | ---: | ---: | ---: | ---: | | Model (7B) | 128 | 2e-5 | 3 | 4096 | 0 | ## Performance Compared to the llama2-chat, our models can have better average performance.<br> | | Average | ARC | HellaSwag | MMLU | TruthfulQA | Alpaca_Eval | Avg Length | |---|---|---|---|---|---|---|---| | Llama-2-7b-chat | 56.335 | 52.9 | 78.55 | 48.32 | 45.57 | 71.37 | 1479 | | Llama-2-13b-chat | 59.935 | 59.04| 81.94 | 54.64 | 44.12 | 81.09 | 1513 | ||||||||| | claude_alpaca-7b | 57.78 | 56.66 | 81.17 | 46.58 | 46.71 | 71.23 | 1066 | | claude_alpaca-13b | 61.29 | 61.18 | 84.08 | 55.74 | 44.18 | 78.93 | 1127 | ## Citation Please consider citing our paper if you think our codes, data, or models are useful. Thank you! ``` @misc{claude2-alpaca, author = {Lichang Chen and Khalid Saifullah and Ming Li and Tianyi Zhou and Heng Huang}, title = {Claude2-Alpaca: Instruction tuning datasets distilled from claude}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/Lichang-Chen/claude2-alpaca}}, } ```
KalbeDigitalLab/alpara-7b-new
KalbeDigitalLab
2023-11-10T11:03:08Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:yahma/llama-7b-hf", "base_model:adapter:yahma/llama-7b-hf", "region:us" ]
null
2023-11-10T11:03:02Z
--- library_name: peft base_model: yahma/llama-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.2.dev0
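The bitsandbytes settings above indicate the adapter was trained against an 8-bit base model. As a minimal, unverified sketch (assuming this is a LoRA-style adapter loadable through `PeftModel`, and using the base model named in the metadata), inference could be set up roughly as follows; the prompt and generation parameters are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "yahma/llama-7b-hf"           # base model listed for this adapter
adapter_id = "KalbeDigitalLab/alpara-7b-new"  # this repository

# Load the base model in 8-bit, mirroring load_in_8bit=True from the training config above
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Attach the PEFT adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "Explain what a parameter-efficient adapter is in one sentence."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```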
dengh/Taxi-v3
dengh
2023-11-10T11:01:15Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T11:01:09Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.75 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="dengh/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
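Beyond loading the agent, a short evaluation sketch is shown below. It is illustrative only: it assumes the pickled dictionary stores the learned table under a `qtable` key alongside `env_id`, and it uses the `gymnasium` API; adjust if the saved format differs.

```python
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download and unpickle the saved agent (same file as in the snippet above)
path = hf_hub_download(repo_id="dengh/Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])  # assumed key name; inspect the pickle if this fails

episode_rewards = []
for _ in range(10):
    state, _ = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the Q-table
        state, reward, terminated, truncated, _ = env.step(action)
        total_reward += reward
        done = terminated or truncated
    episode_rewards.append(total_reward)

print(f"Mean reward over 10 episodes: {np.mean(episode_rewards):.2f}")
```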
BlueWard/pegasus-x-base-testing-finetune-indosum
BlueWard
2023-11-10T10:58:27Z
42
0
transformers
[ "transformers", "pytorch", "pegasus_x", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-x-base", "base_model:finetune:google/pegasus-x-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-02T02:42:44Z
--- base_model: google/pegasus-x-base tags: - generated_from_trainer model-index: - name: pegasus-x-base-testing-finetune-indosum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-x-base-testing-finetune-indosum This model is a fine-tuned version of [google/pegasus-x-base](https://huggingface.co/google/pegasus-x-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.0 | 1.0 | 35677 | nan | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.13.2
keylazy/Llama-2-7b-chat-hf-ark
keylazy
2023-11-10T10:56:52Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-09T03:22:09Z
--- tags: - generated_from_trainer model-index: - name: Llama-2-7b-chat-hf-ark results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-chat-hf-ark This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.0662 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.825 | 1.0 | 411 | 5.9952 | | 5.5992 | 2.0 | 822 | 4.6529 | | 4.6435 | 3.0 | 1233 | 4.3127 | | 4.1195 | 4.0 | 1645 | 4.1649 | | 3.9117 | 5.0 | 2056 | 4.0878 | | 3.7508 | 6.0 | 2467 | 4.0662 | | 3.5867 | 7.0 | 2878 | 4.1062 | | 3.1731 | 8.0 | 3290 | 4.2059 | | 2.8522 | 9.0 | 3701 | 4.3372 | | 2.6968 | 9.99 | 4110 | 4.3987 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
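No usage example is provided above. As a rough sketch only (the expected prompt or chat format is not documented, so plain text is used), the checkpoint could be loaded for generation like this:

```python
from transformers import pipeline

# Load this checkpoint as a standard text-generation pipeline
generator = pipeline("text-generation", model="keylazy/Llama-2-7b-chat-hf-ark")

# Placeholder prompt; the training data and any special formatting are not documented here
output = generator("Once upon a time", max_new_tokens=64, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```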
TheBloke/claude2-alpaca-7B-AWQ
TheBloke
2023-11-10T10:56:03Z
20
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:umd-zhou-lab/claude2_alpaca", "base_model:umd-zhou-lab/claude2-alpaca-7B", "base_model:quantized:umd-zhou-lab/claude2-alpaca-7B", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2023-11-10T10:42:03Z
--- base_model: umd-zhou-lab/claude2-alpaca-7B datasets: - umd-zhou-lab/claude2_alpaca inference: false language: - en license: llama2 model_creator: Tianyi Lab @ UMD model_name: Claude2 Alpaca 7B model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Claude2 Alpaca 7B - AWQ - Model creator: [Tianyi Lab @ UMD](https://huggingface.co/umd-zhou-lab) - Original model: [Claude2 Alpaca 7B](https://huggingface.co/umd-zhou-lab/claude2-alpaca-7B) <!-- description start --> ## Description This repo contains AWQ model files for [Tianyi Lab @ UMD's Claude2 Alpaca 7B](https://huggingface.co/umd-zhou-lab/claude2-alpaca-7B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. 
It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/claude2-alpaca-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/claude2-alpaca-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/claude2-alpaca-7B-GGUF) * [Tianyi Lab @ UMD's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/umd-zhou-lab/claude2-alpaca-7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/claude2-alpaca-7B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.89 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/claude2-alpaca-7B-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `claude2-alpaca-7B-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. 
For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/claude2-alpaca-7B-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/claude2-alpaca-7B-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/claude2-alpaca-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . 
``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/claude2-alpaca-7B-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Tianyi Lab @ UMD's Claude2 Alpaca 7B # Model Card for umd-zhou-lab/claude2-alpaca-7B <!-- Provide a quick summary of what the model is/does. --> This model is trained by fine-tuning llama-2 with claude2 alpaca data. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** UMD Tianyi Zhou Lab - **Model type:** An auto-regressive language model based on the transformer architecture - **License:** Llama 2 Community License Agreement - **Finetuned from model:** [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) ### Model Sources <!-- Provide the basic links for the model. --> - **GitHub:** [Claude2-Alpaca](https://github.com/Lichang-Chen/claude2-alpaca) - **Data:** [claude2_alpaca](https://huggingface.co/datasets/umd-zhou-lab/claude2_alpaca) ## Uses The primary use of this model is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. 
## Training We use the prompt from [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay | | --- | ---: | ---: | ---: | ---: | ---: | | Model (7B) | 128 | 2e-5 | 3 | 4096 | 0 | ## Performance Compared to the llama2-chat, our models can have better average performance.<br> | | Average | ARC | HellaSwag | MMLU | TruthfulQA | Alpaca_Eval | Avg Length | |---|---|---|---|---|---|---|---| | Llama-2-7b-chat | 56.335 | 52.9 | 78.55 | 48.32 | 45.57 | 71.37 | 1479 | | Llama-2-13b-chat | 59.935 | 59.04| 81.94 | 54.64 | 44.12 | 81.09 | 1513 | ||||||||| | claude_alpaca-7b | 57.78 | 56.66 | 81.17 | 46.58 | 46.71 | 71.23 | 1066 | | claude_alpaca-13b | 61.29 | 61.18 | 84.08 | 55.74 | 44.18 | 78.93 | 1127 | ## Citation Please consider citing our paper if you think our codes, data, or models are useful. Thank you! ``` @misc{claude2-alpaca, author = {Lichang Chen and Khalid Saifullah and Ming Li and Tianyi Zhou and Heng Huang}, title = {Claude2-Alpaca: Instruction tuning datasets distilled from claude}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/Lichang-Chen/claude2-alpaca}}, } ```
xanore/swin-tiny-patch4-window7-224
xanore
2023-11-10T10:52:57Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-10T06:02:40Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0220 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0963 | 1.0 | 56 | 0.0343 | 0.9875 | | 0.0481 | 1.99 | 112 | 0.0239 | 0.9912 | | 0.0338 | 2.99 | 168 | 0.0220 | 0.9925 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
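Since usage information is marked as needed above, here is a minimal, unverified inference sketch; the image path is a placeholder and the label set depends on the (undocumented) training data.

```python
from transformers import pipeline
from PIL import Image

# Load the fine-tuned checkpoint as an image-classification pipeline
classifier = pipeline("image-classification", model="xanore/swin-tiny-patch4-window7-224")

image = Image.open("example.jpg")  # replace with a real image from the target domain
for prediction in classifier(image):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```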
rishitunu/new_ecc_segformer
rishitunu
2023-11-10T10:52:05Z
6
0
transformers
[ "transformers", "pytorch", "segformer", "image-segmentation", "vision", "generated_from_trainer", "base_model:nvidia/mit-b5", "base_model:finetune:nvidia/mit-b5", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2023-11-08T11:37:04Z
--- license: other base_model: nvidia/mit-b5 tags: - image-segmentation - vision - generated_from_trainer model-index: - name: new_ecc_segformer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # new_ecc_segformer This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the rishitunu/ECC_crackdataset_withsplit dataset. It achieves the following results on the evaluation set: - Loss: 0.0663 - Mean Iou: 0.1943 - Mean Accuracy: 0.3915 - Overall Accuracy: 0.3915 - Accuracy Background: nan - Accuracy Crack: 0.3915 - Iou Background: 0.0 - Iou Crack: 0.3887 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: polynomial - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Crack | Iou Background | Iou Crack | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:--------------:|:--------------:|:---------:| | 0.0489 | 1.0 | 438 | 0.0634 | 0.1464 | 0.2933 | 0.2933 | nan | 0.2933 | 0.0 | 0.2929 | | 0.0542 | 2.0 | 876 | 0.0439 | 0.1956 | 0.3917 | 0.3917 | nan | 0.3917 | 0.0 | 0.3912 | | 0.0484 | 3.0 | 1314 | 0.0434 | 0.1719 | 0.3551 | 0.3551 | nan | 0.3551 | 0.0 | 0.3439 | | 0.0539 | 4.0 | 1752 | 0.0447 | 0.1871 | 0.3820 | 0.3820 | nan | 0.3820 | 0.0 | 0.3741 | | 0.0565 | 5.0 | 2190 | 0.0435 | 0.1888 | 0.3937 | 0.3937 | nan | 0.3937 | 0.0 | 0.3777 | | 0.0544 | 6.0 | 2628 | 0.0442 | 0.1904 | 0.3930 | 0.3930 | nan | 0.3930 | 0.0 | 0.3808 | | 0.0421 | 7.0 | 3066 | 0.0449 | 0.2256 | 0.4651 | 0.4651 | nan | 0.4651 | 0.0 | 0.4513 | | 0.0352 | 8.0 | 3504 | 0.0587 | 0.1569 | 0.3165 | 0.3165 | nan | 0.3165 | 0.0 | 0.3138 | | 0.0394 | 9.0 | 3942 | 0.0442 | 0.1842 | 0.3710 | 0.3710 | nan | 0.3710 | 0.0 | 0.3684 | | 0.0445 | 10.0 | 4380 | 0.0609 | 0.1167 | 0.4173 | 0.4173 | nan | 0.4173 | 0.0 | 0.2334 | | 0.0503 | 11.0 | 4818 | 0.0504 | 0.1702 | 0.3714 | 0.3714 | nan | 0.3714 | 0.0 | 0.3403 | | 0.0379 | 12.0 | 5256 | 0.0460 | 0.1903 | 0.3869 | 0.3869 | nan | 0.3869 | 0.0 | 0.3807 | | 0.0405 | 13.0 | 5694 | 0.0452 | 0.2017 | 0.4084 | 0.4084 | nan | 0.4084 | 0.0 | 0.4034 | | 0.0367 | 14.0 | 6132 | 0.0477 | 0.1995 | 0.4060 | 0.4060 | nan | 0.4060 | 0.0 | 0.3990 | | 0.0315 | 15.0 | 6570 | 0.0498 | 0.2073 | 0.4208 | 0.4208 | nan | 0.4208 | 0.0 | 0.4147 | | 0.0244 | 16.0 | 7008 | 0.0486 | 0.1963 | 0.4029 | 0.4029 | nan | 0.4029 | 0.0 | 0.3926 | | 0.031 | 17.0 | 7446 | 0.0568 | 0.1927 | 0.3892 | 0.3892 | nan | 0.3892 | 0.0 | 0.3855 | | 0.0288 | 18.0 | 7884 | 0.0560 | 0.2033 | 0.4092 | 0.4092 | nan | 0.4092 | 0.0 | 0.4067 | | 0.0354 | 19.0 | 8322 | 0.0613 | 0.2007 | 0.4056 | 0.4056 | nan | 0.4056 | 0.0 | 0.4013 | | 0.0315 | 20.0 | 8760 | 0.0605 | 0.1865 | 0.3752 | 0.3752 | nan | 0.3752 | 0.0 | 0.3731 | | 0.0343 | 21.0 | 9198 | 0.0653 | 0.1991 | 0.4019 | 0.4019 | nan | 0.4019 | 0.0 | 0.3981 | | 0.0327 | 22.0 | 9636 | 0.0660 | 0.1945 | 0.3924 | 0.3924 | nan | 0.3924 | 0.0 | 0.3891 | | 
0.0252 | 22.83 | 10000 | 0.0663 | 0.1943 | 0.3915 | 0.3915 | nan | 0.3915 | 0.0 | 0.3887 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cpu - Datasets 2.14.6 - Tokenizers 0.14.1
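As the usage sections above are still marked "More information needed", the following is a minimal inference sketch; the image path is a placeholder, and the assumption that label id 1 corresponds to "crack" is inferred from the metrics table rather than stated in the card.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "rishitunu/new_ecc_segformer"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)
model.eval()

image = Image.open("crack_sample.jpg").convert("RGB")  # replace with a real surface image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]
print("Predicted crack pixels:", int((mask == 1).sum()))  # assumes class 1 = crack
```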
RomanTeucher/Reinforce-unit_4_model
RomanTeucher
2023-11-10T10:43:04Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T10:42:54Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-unit_4_model results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
bradmin/reward-bert-duplicate-answer-2
bradmin
2023-11-10T10:41:49Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-large", "base_model:finetune:klue/roberta-large", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-10T07:32:56Z
--- base_model: klue/roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: reward-bert-duplicate-answer-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reward-bert-duplicate-answer-2 This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4506 - Accuracy: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 2023 - gradient_accumulation_steps: 10 - total_train_batch_size: 60 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7099 | 0.26 | 100 | 0.6931 | 1.0 | | 0.6983 | 0.53 | 200 | 0.6912 | 0.0 | | 0.4911 | 0.79 | 300 | 0.4506 | 0.0 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
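No usage example is given above. The sketch below assumes this checkpoint behaves like a standard sequence-classification reward model and that question/answer pairs are passed as a sentence pair; neither the exact input format nor the meaning of the output logits is documented here, so treat it as illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "bradmin/reward-bert-duplicate-answer-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Hypothetical question/answer pair; the real training format is not documented
question = "대한민국의 수도는 어디인가요?"
answer = "대한민국의 수도는 서울입니다."

inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("Raw reward logits:", logits.tolist())
```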
TheBloke/claude2-alpaca-13B-GPTQ
TheBloke
2023-11-10T10:40:55Z
25
6
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:umd-zhou-lab/claude2_alpaca", "base_model:umd-zhou-lab/claude2-alpaca-13B", "base_model:quantized:umd-zhou-lab/claude2-alpaca-13B", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-11-10T09:49:09Z
--- base_model: umd-zhou-lab/claude2-alpaca-13B datasets: - umd-zhou-lab/claude2_alpaca inference: false language: - en license: llama2 model_creator: Tianyi Lab @ UMD model_name: Claude2 Alpaca 13B model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Claude2 Alpaca 13B - GPTQ - Model creator: [Tianyi Lab @ UMD](https://huggingface.co/umd-zhou-lab) - Original model: [Claude2 Alpaca 13B](https://huggingface.co/umd-zhou-lab/claude2-alpaca-13B) <!-- description start --> ## Description This repo contains GPTQ model files for [Tianyi Lab @ UMD's Claude2 Alpaca 13B](https://huggingface.co/umd-zhou-lab/claude2-alpaca-13B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/claude2-alpaca-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/claude2-alpaca-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF) * [Tianyi Lab @ UMD's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/umd-zhou-lab/claude2-alpaca-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. 
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/claude2-alpaca-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/claude2-alpaca-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/claude2-alpaca-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. 
| | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/claude2-alpaca-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/claude2-alpaca-13B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 14.55 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/claude2-alpaca-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/claude2-alpaca-13B-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/claude2-alpaca-13B-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `claude2-alpaca-13B-GPTQ`: ```shell mkdir claude2-alpaca-13B-GPTQ huggingface-cli download TheBloke/claude2-alpaca-13B-GPTQ --local-dir claude2-alpaca-13B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir claude2-alpaca-13B-GPTQ huggingface-cli download TheBloke/claude2-alpaca-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir claude2-alpaca-13B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). 
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir claude2-alpaca-13B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/claude2-alpaca-13B-GPTQ --local-dir claude2-alpaca-13B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/claude2-alpaca-13B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/claude2-alpaca-13B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/claude2-alpaca-13B-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `claude2-alpaca-13B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/claude2-alpaca-13B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/claude2-alpaca-13B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. 
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Tianyi Lab @ UMD's Claude2 Alpaca 13B # Model Card for umd-zhou-lab/claude2-alpaca-13B <!-- Provide a quick summary of what the model is/does. --> This model is trained by fine-tuning llama-2 with claude2 alpaca data. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** UMD Tianyi Zhou Lab - **Model type:** An auto-regressive language model based on the transformer architecture - **License:** Llama 2 Community License Agreement - **Finetuned from model:** [meta-llama/Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b) ### Model Sources <!-- Provide the basic links for the model. --> - **GitHub:** [Claude2-Alpaca](https://github.com/Lichang-Chen/claude2-alpaca) - **Data:** [claude2_alpaca](https://huggingface.co/datasets/umd-zhou-lab/claude2_alpaca) ## Uses The primary use of this model is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. 
## Training

We use the prompt template from [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| Model (13B) | 128 | 1e-5 | 5 | 2048 | 0 |

## Performance

Compared to the Llama-2 chat models, our models achieve better average performance.<br>

| | Average | ARC | HellaSwag | MMLU | TruthfulQA | Alpaca_Eval | Avg Length |
|---|---|---|---|---|---|---|---|
| Llama-2-7b-chat | 56.335 | 52.9 | 78.55 | 48.32 | 45.57 | 71.37 | 1479 |
| Llama-2-13b-chat | 59.935 | 59.04 | 81.94 | 54.64 | 44.12 | 81.09 | 1513 |
|||||||||
| claude_alpaca-7b | 57.78 | 56.66 | 81.17 | 46.58 | 46.71 | 71.23 | 1066 |
| claude_alpaca-13b | 61.29 | 61.18 | 84.08 | 55.74 | 44.18 | 78.93 | 1127 |

## Citation

Please consider citing our paper if you find our code, data, or models useful. Thank you!

```
@misc{claude2-alpaca,
  author = {Lichang Chen and Khalid Saifullah and Ming Li and Tianyi Zhou and Heng Huang},
  title = {Claude2-Alpaca: Instruction tuning datasets distilled from claude},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Lichang-Chen/claude2-alpaca}},
}
```
KyriaAnnwyn/lora-trained-RachelMcPherson_RV2_novae_long-xl
KyriaAnnwyn
2023-11-10T10:34:03Z
1
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:SG161222/RealVisXL_V2.0", "base_model:adapter:SG161222/RealVisXL_V2.0", "license:openrail++", "region:us" ]
text-to-image
2023-11-10T10:07:01Z
---
license: openrail++
base_model: SG161222/RealVisXL_V2.0
instance_prompt: a photo of RachelMcPherson
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - KyriaAnnwyn/lora-trained-RachelMcPherson_RV2_novae_long-xl

These are LoRA adaptation weights for SG161222/RealVisXL_V2.0. The weights were trained on a photo of RachelMcPherson using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

LoRA for the text encoder was enabled: True.

Special VAE used for training: None.
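A hedged usage sketch with 🤗 Diffusers is shown below; it assumes the adapter can be attached to the SDXL base pipeline via `load_lora_weights`, and the prompt and inference settings are only illustrative:

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model these LoRA weights were trained against
pipe = DiffusionPipeline.from_pretrained(
    "SG161222/RealVisXL_V2.0", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA adapter from this repository
pipe.load_lora_weights("KyriaAnnwyn/lora-trained-RachelMcPherson_RV2_novae_long-xl")

# Generate with the instance prompt used during training
image = pipe("a photo of RachelMcPherson", num_inference_steps=30).images[0]
image.save("example.png")
```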
AntoineD/MiniLM_classification_tools_fr
AntoineD
2023-11-10T10:17:05Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:microsoft/Multilingual-MiniLM-L12-H384", "base_model:finetune:microsoft/Multilingual-MiniLM-L12-H384", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-10T10:06:08Z
--- license: mit base_model: microsoft/Multilingual-MiniLM-L12-H384 tags: - generated_from_trainer metrics: - accuracy model-index: - name: MiniLM_classification_tools_fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MiniLM_classification_tools_fr This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7694 - Accuracy: 0.75 - Learning Rate: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 24 - eval_batch_size: 192 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Rate | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 7 | 2.0620 | 0.35 | 0.0001 | | No log | 2.0 | 14 | 1.9515 | 0.425 | 0.0001 | | No log | 3.0 | 21 | 1.7736 | 0.45 | 0.0001 | | No log | 4.0 | 28 | 1.6055 | 0.475 | 0.0001 | | No log | 5.0 | 35 | 1.5108 | 0.5 | 0.0001 | | No log | 6.0 | 42 | 1.4074 | 0.45 | 9e-05 | | No log | 7.0 | 49 | 1.3848 | 0.475 | 0.0001 | | No log | 8.0 | 56 | 1.2533 | 0.625 | 0.0001 | | No log | 9.0 | 63 | 1.2463 | 0.525 | 0.0001 | | No log | 10.0 | 70 | 1.1593 | 0.6 | 0.0001 | | No log | 11.0 | 77 | 1.1637 | 0.6 | 0.0001 | | No log | 12.0 | 84 | 1.0900 | 0.625 | 8e-05 | | No log | 13.0 | 91 | 0.9577 | 0.7 | 0.0001 | | No log | 14.0 | 98 | 0.9465 | 0.675 | 0.0001 | | No log | 15.0 | 105 | 0.9255 | 0.675 | 0.0001 | | No log | 16.0 | 112 | 0.8836 | 0.675 | 0.0001 | | No log | 17.0 | 119 | 0.8307 | 0.675 | 0.0001 | | No log | 18.0 | 126 | 0.8335 | 0.725 | 7e-05 | | No log | 19.0 | 133 | 0.8469 | 0.625 | 0.0001 | | No log | 20.0 | 140 | 0.7384 | 0.75 | 0.0001 | | No log | 21.0 | 147 | 0.7330 | 0.775 | 0.0001 | | No log | 22.0 | 154 | 0.7811 | 0.775 | 0.0001 | | No log | 23.0 | 161 | 0.6857 | 0.8 | 0.0001 | | No log | 24.0 | 168 | 0.6733 | 0.825 | 6e-05 | | No log | 25.0 | 175 | 0.6510 | 0.85 | 0.0001 | | No log | 26.0 | 182 | 0.6363 | 0.85 | 0.0001 | | No log | 27.0 | 189 | 0.6101 | 0.875 | 0.0001 | | No log | 28.0 | 196 | 0.6434 | 0.8 | 0.0001 | | No log | 29.0 | 203 | 0.6644 | 0.775 | 0.0001 | | No log | 30.0 | 210 | 0.7162 | 0.75 | 5e-05 | | No log | 31.0 | 217 | 0.7422 | 0.775 | 0.0000 | | No log | 32.0 | 224 | 0.7120 | 0.775 | 0.0000 | | No log | 33.0 | 231 | 0.6296 | 0.8 | 0.0000 | | No log | 34.0 | 238 | 0.6522 | 0.775 | 0.0000 | | No log | 35.0 | 245 | 0.7636 | 0.75 | 0.0000 | | No log | 36.0 | 252 | 0.7703 | 0.75 | 4e-05 | | No log | 37.0 | 259 | 0.7694 | 0.75 | 0.0000 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.1
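A hedged inference sketch with the `transformers` pipeline is given below; the example sentence is illustrative, and the returned label names (e.g. `LABEL_0`) depend on a label mapping that is not documented in this card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for French text classification
classifier = pipeline("text-classification", model="AntoineD/MiniLM_classification_tools_fr")

# Illustrative input; the meaning of the predicted label is not documented here
print(classifier("Quel outil faut-il utiliser pour percer ce mur ?"))
```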
suno/bark-small
suno
2023-11-10T10:11:12Z
26,353
192
transformers
[ "transformers", "pytorch", "bark", "text-to-audio", "audio", "text-to-speech", "en", "de", "es", "fr", "hi", "it", "ja", "ko", "pl", "pt", "ru", "tr", "zh", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2023-07-18T13:50:46Z
--- language: - en - de - es - fr - hi - it - ja - ko - pl - pt - ru - tr - zh thumbnail: >- https://user-images.githubusercontent.com/5068315/230698495-cbb1ced9-c911-4c9a-941d-a1a4a1286ac6.png library: bark license: mit tags: - bark - audio - text-to-speech duplicated_from: ylacombe/bark-small pipeline_tag: text-to-speech --- # Bark Bark is a transformer-based text-to-audio model created by [Suno](https://www.suno.ai). Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints ready for inference. The original github repo and model card can be found [here](https://github.com/suno-ai/bark). This model is meant for research purposes only. The model output is not censored and the authors do not endorse the opinions in the generated content. Use at your own risk. Two checkpoints are released: - [**small** (this checkpoint)](https://huggingface.co/suno/bark-small) - [large](https://huggingface.co/suno/bark) ## Example Try out Bark yourself! * Bark Colab: <a target="_blank" href="https://colab.research.google.com/drive/1eJfA2XUa-mXwdMy7DoYKVYHI1iTd9Vkt?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Colab: <a target="_blank" href="https://colab.research.google.com/drive/1dWWkZzvu7L9Bunq9zvD-W02RFUXoW-Pd?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Demo: <a target="_blank" href="https://huggingface.co/spaces/suno/bark"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> ## 🤗 Transformers Usage You can run Bark locally with the 🤗 Transformers library from version 4.31.0 onwards. 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy: ``` pip install --upgrade pip pip install --upgrade transformers scipy ``` 2. Run inference via the `Text-to-Speech` (TTS) pipeline. You can infer the bark model via the TTS pipeline in just a few lines of code! ```python from transformers import pipeline import scipy synthesiser = pipeline("text-to-speech", "suno/bark-small") speech = synthesiser("Hello, my dog is cooler than you!", forward_params={"do_sample": True}) scipy.io.wavfile.write("bark_out.wav", rate=speech["sampling_rate"], data=speech["audio"]) ``` 3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 24 kHz speech waveform for more fine-grained control. ```python from transformers import AutoProcessor, AutoModel processor = AutoProcessor.from_pretrained("suno/bark-small") model = AutoModel.from_pretrained("suno/bark-small") inputs = processor( text=["Hello, my name is Suno. And, uh — and I like pizza. [laughs] But I also have other interests such as playing tic tac toe."], return_tensors="pt", ) speech_values = model.generate(**inputs, do_sample=True) ``` 4. Listen to the speech samples either in an ipynb notebook: ```python from IPython.display import Audio sampling_rate = model.generation_config.sample_rate Audio(speech_values.cpu().numpy().squeeze(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g. 
`scipy`:

```python
import scipy

sampling_rate = model.generation_config.sample_rate
scipy.io.wavfile.write("bark_out.wav", rate=sampling_rate, data=speech_values.cpu().numpy().squeeze())
```

For more details on using the Bark model for inference with the 🤗 Transformers library, refer to the [Bark docs](https://huggingface.co/docs/transformers/model_doc/bark).

### Optimization tips

Refer to this [blog post](https://huggingface.co/blog/optimizing-bark#benchmark-results) to find out more about the following methods and a benchmark of their benefits.

#### Get significant speed-ups:

**Using 🤗 Better Transformer**

Better Transformer is an 🤗 Optimum feature that performs kernel fusion under the hood. You can gain 20% to 30% in speed with zero performance degradation. It only requires one line of code to export the model to 🤗 Better Transformer:

```python
model = model.to_bettertransformer()
```

Note that 🤗 Optimum must be installed before using this feature. [Here's how to install it.](https://huggingface.co/docs/optimum/installation)

**Using Flash Attention 2**

Flash Attention 2 is an even faster, optimized version of the previous optimization.

```python
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16, use_flash_attention_2=True).to(device)
```

Make sure to load your model in half-precision (e.g. `torch.float16`) and to [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2.

**Note:** Flash Attention 2 is only available on newer GPUs; refer to 🤗 Better Transformer in case your GPU doesn't support it.

#### Reduce memory footprint:

**Using half-precision**

You can speed up inference and reduce memory footprint by 50% simply by loading the model in half-precision (e.g. `torch.float16`).

**Using CPU offload**

Bark is made up of 4 sub-models, which are called up sequentially during audio generation. In other words, while one sub-model is in use, the other sub-models are idle.

If you're using a CUDA device, a simple way to benefit from an 80% reduction in memory footprint is to offload the sub-models from the GPU while they are idle. This operation is called CPU offloading, and it only requires one line of code:

```python
model.enable_cpu_offload()
```

Note that 🤗 Accelerate must be installed before using this feature. [Here's how to install it.](https://huggingface.co/docs/accelerate/basic_tutorials/install)

## Suno Usage

You can also run Bark locally through the original [Bark library](https://github.com/suno-ai/bark):

1. First install the [`bark` library](https://github.com/suno-ai/bark)

2. Run the following Python code:

```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from IPython.display import Audio

# download and load all models
preload_models()

# generate audio from text
text_prompt = """
Hello, my name is Suno. And, uh — and I like pizza. [laughs]
But I also have other interests such as playing tic tac toe.
"""
speech_array = generate_audio(text_prompt)

# play audio in notebook
Audio(speech_array, rate=SAMPLE_RATE)
```

[pizza.webm](https://user-images.githubusercontent.com/5068315/230490503-417e688d-5115-4eee-9550-b46a2b465ee3.webm)

To save `speech_array` as a WAV file:

```python
from scipy.io.wavfile import write as write_wav

write_wav("/path/to/audio.wav", SAMPLE_RATE, speech_array)
```

## Model Details

The following is additional information about the models released here.

Bark is a series of three transformer models that turn text into audio.
### Text to semantic tokens

- Input: text, tokenized with the [BERT tokenizer from Hugging Face](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer)
- Output: semantic tokens that encode the audio to be generated

### Semantic to coarse tokens

- Input: semantic tokens
- Output: tokens from the first two codebooks of Facebook's [EnCodec codec](https://github.com/facebookresearch/encodec)

### Coarse to fine tokens

- Input: the first two codebooks from EnCodec
- Output: 8 codebooks from EnCodec

### Architecture

| Model | Parameters | Attention | Output Vocab size |
|:-------------------------:|:----------:|------------|:-----------------:|
| Text to semantic tokens | 80/300 M | Causal | 10,000 |
| Semantic to coarse tokens | 80/300 M | Causal | 2x 1,024 |
| Coarse to fine tokens | 80/300 M | Non-causal | 6x 1,024 |

### Release date

April 2023

## Broader Implications

We anticipate that this model's text-to-audio capabilities can be used to improve accessibility tools in a variety of languages.

While we hope that this release will enable users to express their creativity and build applications that are a force for good, we acknowledge that any text-to-audio model has the potential for dual use. While it is not straightforward to clone a known person's voice with Bark, it can still be used for nefarious purposes. To further reduce the chances of unintended use of Bark, we also release a simple classifier that detects Bark-generated audio with high accuracy (see the notebooks section of the main repository).

## License

Bark is licensed under the [MIT License](https://github.com/suno-ai/bark/blob/main/LICENSE), meaning it's available for commercial use.
logits/llama2-7b-book-qa
logits
2023-11-10T09:55:52Z
0
0
null
[ "region:us" ]
null
2023-11-10T09:48:36Z
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModelForCausalLM

# Load the base Llama-2 model and tokenizer
base_model = LlamaForCausalLM.from_pretrained("your/llama/path", device_map="auto")
tokenizer = LlamaTokenizer.from_pretrained("your/llama/path")

# Attach the LoRA adapter from this repository
llama_lora = PeftModelForCausalLM.from_pretrained(base_model, "logits/llama2-7b-book-qa")

prompt = "your prompt"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output_ids = llama_lora.generate(input_ids=input_ids, max_new_tokens=256)
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output)
```
baskotayunisha/NFT
baskotayunisha
2023-11-10T09:51:01Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-10T09:50:33Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: NFT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NFT This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
ntmkhanh/recipe
ntmkhanh
2023-11-10T09:50:29Z
4
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "generated_from_trainer", "base_model:vinai/bartpho-word-base", "base_model:finetune:vinai/bartpho-word-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-10T09:40:57Z
--- base_model: vinai/bartpho-word-base tags: - generated_from_trainer model-index: - name: eval_bartpho_final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval_bartpho_final This model is a fine-tuned version of [vinai/bartpho-word-base](https://huggingface.co/vinai/bartpho-word-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20000 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
Brololo/unit2
Brololo
2023-11-10T09:44:29Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T09:41:17Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="Brololo/unit2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
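Beyond building the environment, a greedy rollout sketch is shown below; it assumes the pickled dictionary stores the table under a `qtable` key (the Deep RL course convention) and that a Gymnasium-style step API is available, so adjust it if your setup differs:

```python
import gymnasium as gym
import numpy as np

model = load_from_hub(repo_id="Brololo/unit2", filename="q-learning.pkl")  # same helper as above
env = gym.make(model["env_id"], is_slippery=False)

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table (key name assumed)
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
print(total_reward)
```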
AntoineD/camembert_ccnet_classification_tools_fr
AntoineD
2023-11-10T09:41:36Z
3
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "generated_from_trainer", "base_model:almanach/camembert-base-ccnet", "base_model:finetune:almanach/camembert-base-ccnet", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-09T14:43:08Z
--- base_model: camembert/camembert-base-ccnet tags: - generated_from_trainer metrics: - accuracy model-index: - name: camembert_ccnet_classification_tools_fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # camembert_ccnet_classification_tools_fr This model is a fine-tuned version of [camembert/camembert-base-ccnet](https://huggingface.co/camembert/camembert-base-ccnet) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5125 - Accuracy: 0.9 - Learning Rate: 0.0001 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 24 - eval_batch_size: 192 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Rate | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 7 | 1.8894 | 0.525 | 0.0001 | | No log | 2.0 | 14 | 1.4269 | 0.675 | 0.0001 | | No log | 3.0 | 21 | 1.1038 | 0.75 | 0.0001 | | No log | 4.0 | 28 | 0.8014 | 0.85 | 0.0001 | | No log | 5.0 | 35 | 0.6406 | 0.85 | 0.0001 | | No log | 6.0 | 42 | 0.6220 | 0.875 | 9e-05 | | No log | 7.0 | 49 | 0.4642 | 0.875 | 0.0001 | | No log | 8.0 | 56 | 0.5596 | 0.85 | 0.0001 | | No log | 9.0 | 63 | 0.5648 | 0.85 | 0.0001 | | No log | 10.0 | 70 | 0.5025 | 0.9 | 0.0001 | | No log | 11.0 | 77 | 0.5263 | 0.9 | 0.0001 | | No log | 12.0 | 84 | 0.5062 | 0.9 | 8e-05 | | No log | 13.0 | 91 | 0.4950 | 0.9 | 0.0001 | | No log | 14.0 | 98 | 0.4981 | 0.9 | 0.0001 | | No log | 15.0 | 105 | 0.5036 | 0.9 | 0.0001 | | No log | 16.0 | 112 | 0.5075 | 0.9 | 0.0001 | | No log | 17.0 | 119 | 0.5125 | 0.9 | 0.0001 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.1
hariduraibaskar/ppo-Huggy
hariduraibaskar
2023-11-10T09:36:23Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-10T09:36:15Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hariduraibaskar/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
SuperDan/ppo-Huggy
SuperDan
2023-11-10T09:33:15Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-10T09:33:11Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: SuperDan/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
JLinda/sd-class-butterflies-32
JLinda
2023-11-10T09:30:10Z
1
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-11-10T09:29:48Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('JLinda/sd-class-butterflies-32') image = pipeline().images[0] image ```
GTsky/t5-base-finetuned-multi-oe-full
GTsky
2023-11-10T09:29:46Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-10T09:29:21Z
--- license: apache-2.0 base_model: t5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-base-finetuned-multi-oe-full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-finetuned-multi-oe-full This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2883 - Rouge1: 59.9814 - Rouge2: 51.5747 - Rougel: 59.4429 - Rougelsum: 59.4001 - Gen Len: 10.6632 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.8355 | 1.0 | 591 | 0.3871 | 51.195 | 41.4443 | 50.5766 | 50.5419 | 11.04 | | 0.3465 | 2.0 | 1182 | 0.3003 | 57.5018 | 48.6629 | 56.9622 | 56.8835 | 10.767 | | 0.2252 | 3.0 | 1773 | 0.2883 | 59.9814 | 51.5747 | 59.4429 | 59.4001 | 10.6632 | ### Framework versions - Transformers 4.35.0 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.14.1
OpenNLPLab/HGRN-1B
OpenNLPLab
2023-11-10T09:27:49Z
19
8
transformers
[ "transformers", "pytorch", "hgrn", "text-generation", "HGRN", "Recurrent Neural Network", "custom_code", "en", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2023-11-10T08:05:27Z
--- license: apache-2.0 language: - en pipeline_tag: text-generation tags: - HGRN - Recurrent Neural Network --- <div align="center"> <h1> HGRN - Hierarchically Gated Recurrent Neural Network for Sequence Modeling </h1> </div> <p align="center"> 💻 <a href="https://github.com/OpenNLPLab/HGRN" target="_blank">GitHub </a> </p> - [Overall Architecture](#overall-architecture) - [Experiments](#experiments) - [Environment Preparation](#environment-preparation) - [Env1](#env1) - [Env2](#env2) - [Autoregressive language model](#autoregressive-language-model) - [1) Preprocess the data](#1-preprocess-the-data) - [2) Train the autoregressive language model](#2-train-the-autoregressive-language-model) - [Image modeling](#image-modeling) - [LRA](#lra) - [1) Preparation](#1-preparation) - [2) Training](#2-training) - [Standalone code](#standalone-code) ## Overall Architecture The overall network architecture is as follows: <div align="center"> <img src="./hgrn.png" width = "100%" height = "100%" alt="network" align=center /></div> ## Experiments ### Environment Preparation Our experiment uses two conda environments, where Autoregressive language modeling, needs to configure the environment according to the Env1 part, and LRA needs to configure the environment according to the Env2 part. #### Env1 First build the conda environment based on the yaml file: ``` conda env create --file env1.yaml ``` If you meet an error when installing torch, just remove torch and torchvision in the yaml file, rerun the above command, and then run the below commands: ``` conda activate hgrn wget https://download.pytorch.org/whl/cu111/torch-1.8.1%2Bcu111-cp36-cp36m-linux_x86_64.whl pip install torch-1.8.1+cu111-cp36-cp36m-linux_x86_64.whl pip install -r requirements_hgrn.txt ``` Then, install `hgru-pytorch`: ``` conda activate hgrn cd hgru-pytorch pip install . ``` Finally, install our version of fairseq: ``` cd fairseq pip install --editable ./ ``` #### Env2 Build the conda environment based on the yaml file: ``` conda env create --file env2.yaml ``` If you encounter difficulties in setting up the environment, you can install the conda environment first, and then use the following command to install the pip packages: ``` pip install torch==1.10.0+cu111 torchvision==0.11.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html pip install -r requirements_lra.txt ``` Finally, install `hgru-pytorch`: ``` conda activate lra cd hgru-pytorch pip install . 
``` ### Autoregressive language model #### 1) Preprocess the data First download the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/): ``` wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip unzip wikitext-103-raw-v1.zip ``` Next, encode it with the GPT-2 BPE: ``` mkdir -p gpt2_bpe wget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json wget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe for SPLIT in train valid test; do \ python -m examples.roberta.multiprocessing_bpe_encoder \ --encoder-json gpt2_bpe/encoder.json \ --vocab-bpe gpt2_bpe/vocab.bpe \ --inputs wikitext-103-raw/wiki.${SPLIT}.raw \ --outputs wikitext-103-raw/wiki.${SPLIT}.bpe \ --keep-empty \ --workers 60; \ done ``` Finally, preprocess/binarize the data using the GPT-2 fairseq dictionary: ``` wget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt fairseq-preprocess \ --only-source \ --srcdict gpt2_bpe/dict.txt \ --trainpref wikitext-103-raw/wiki.train.bpe \ --validpref wikitext-103-raw/wiki.valid.bpe \ --testpref wikitext-103-raw/wiki.test.bpe \ --destdir data-bin/wikitext-103 \ --workers 60 ``` This step comes from [fairseq](https://github.com/facebookresearch/fairseq/blob/main/examples/roberta/README.pretraining.md). #### 2) Train the autoregressive language model Use the following command to train language model: ``` bash script_alm.sh ``` You should change data_dir to preprocessed data. ### Image modeling ``` bash script_im.sh ``` ### LRA #### 1) Preparation Download the codebase: ``` git clone https://github.com/OpenNLPLab/lra.git ``` Download the data: ``` wget https://storage.googleapis.com/long-range-arena/lra_release.gz mv lra_release.gz lra_release.tar.gz tar -xvf lra_release.tar.gz ``` #### 2) Training Use the following script to run the experiments, you should change `PREFIX` to your lra path, change `tasks` to a specific task: ``` python script_lra.py ``` ## Standalone code See [hgru-pytorch](https://github.com/Doraemonzzz/hgru-pytorch).
kch-chaihong/followyourpose-charades
kch-chaihong
2023-11-10T09:25:02Z
0
0
diffusers
[ "diffusers", "region:us" ]
null
2023-10-15T10:06:36Z
## Model Details

The model in this repository is fine-tuned on the Charades dataset, which covers a variety of human actions. Each file in the folder is a checkpoint from the fine-tuning process.

### Model Metadata
---
id:
- `Unique identifier for each video`
- 0MK2C

subject:
- `Unique identifier for each subject in the dataset`
- DXDI

scene:
- `One of the 15 indoor scenes in the dataset`
- Stairs

quality:
- `Quality of the video judged by an annotator (7-point scale, 7 = high quality)`
- 7

relevance:
- `Relevance of the video to the script judged by an annotator (7-point scale, 7 = very relevant)`
- 7

verified:
- `Yes - if an annotator successfully verified that the video matches the script, else No`
- "Yes"

script:
- `The human-generated script used to generate the video`
- "A person is running up the stairs holding a pair of shoes. The person goes through a door."

objects:
- `List of objects identified in the video`
- ["doorway", "shoe", "stairs"]

descriptions:
- `List of descriptions by annotators watching the video`
- "A person runs up the stairs carrying a pair of shoes and opens a door."

actions:
- `Consists of actions - human actions in the video and timings - timing of the action happening in the video`
- actions: [{ action: "run up the stairs", timing: ["1.50", "8.80"] }]
---

- **Developed by:** ICT3104-Team06-2023
- **Finetuned from Model:** stable-diffusion-v1-4
- **Stable Diffusion Model Repository:** https://huggingface.co/YueMafighting/FollowYourPose_v1
- **Training Data Source:** https://prior.allenai.org/projects/charades
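To make the schema concrete, here is a hypothetical record assembled from the example values listed above; note that the action timings are stored as strings and need converting to floats:

```python
record = {
    "id": "0MK2C",
    "subject": "DXDI",
    "scene": "Stairs",
    "quality": 7,
    "relevance": 7,
    "verified": "Yes",
    "script": "A person is running up the stairs holding a pair of shoes. The person goes through a door.",
    "objects": ["doorway", "shoe", "stairs"],
    "descriptions": ["A person runs up the stairs carrying a pair of shoes and opens a door."],
    "actions": [{"action": "run up the stairs", "timing": ["1.50", "8.80"]}],
}

# Timings are strings; convert to floats to get the action duration in seconds.
start, end = (float(t) for t in record["actions"][0]["timing"])
print(record["actions"][0]["action"], round(end - start, 2))  # run up the stairs 7.3
```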
rwood-97/sam_os_counties
rwood-97
2023-11-10T09:17:20Z
12
0
transformers
[ "transformers", "pytorch", "sam", "mask-generation", "maps", "image-segmentation", "dataset:rwood-97/os_counties", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-segmentation
2023-11-07T15:35:41Z
--- license: apache-2.0 datasets: - rwood-97/os_counties pipeline_tag: image-segmentation tags: - maps ---
stevenn01/opus-mt-en-ro-finetuned-en-to-ro
stevenn01
2023-11-10T09:12:03Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "base_model:Helsinki-NLP/opus-mt-en-ro", "base_model:finetune:Helsinki-NLP/opus-mt-en-ro", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-10T07:36:08Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-ro tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: opus-mt-en-ro-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 config: ro-en split: validation args: ro-en metrics: - name: Bleu type: bleu value: 28.1774 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ro-finetuned-en-to-ro This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.2902 - Bleu: 28.1774 - Gen Len: 34.0885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.7412 | 1.0 | 38145 | 1.2902 | 28.1774 | 34.0885 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
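A hedged inference sketch using the `transformers` translation pipeline is shown below; the example sentence is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned English-to-Romanian checkpoint
translator = pipeline("translation_en_to_ro", model="stevenn01/opus-mt-en-ro-finetuned-en-to-ro")

print(translator("The model was fine-tuned on the WMT16 dataset.")[0]["translation_text"])
```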
thingthatis/lcm-sdxl
thingthatis
2023-11-10T09:02:20Z
6
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "arxiv:2310.04378", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-11-10T09:02:19Z
---
library_name: diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- text-to-image
license: openrail++
inference: false
---

# Latent Consistency Model (LCM): SDXL

Latent Consistency Model (LCM) was proposed in [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) by *Simian Luo, Yiqin Tan et al.*; [Simian Luo](https://huggingface.co/SimianLuo), [Suraj Patil](https://huggingface.co/valhalla), and [Daniel Gu](https://huggingface.co/dg845) successfully applied the same approach to create an LCM for SDXL.

This checkpoint is an LCM-distilled version of [`stable-diffusion-xl-base-1.0`](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) that reduces the number of inference steps to only **2 - 8 steps**.

## Usage

LCM SDXL is supported in the 🤗 Hugging Face Diffusers library from version v0.23.0 onwards. To run the model, first install the latest version of the Diffusers library as well as `peft`, `accelerate` and `transformers`:

```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
```

### Text-to-Image

The model can be loaded with its base pipeline `stabilityai/stable-diffusion-xl-base-1.0`. Next, the scheduler needs to be changed to [`LCMScheduler`](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler) and we can reduce the number of inference steps to just 2 to 8 steps.
Please make sure to either disable `guidance_scale` or use values between 1.0 and 2.0.

```python
import torch
from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler

unet = UNet2DConditionModel.from_pretrained("latent-consistency/lcm-sdxl", torch_dtype=torch.float16, variant="fp16")
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

prompt = "a close-up picture of an old man standing in the rain"

image = pipe(prompt, num_inference_steps=4, guidance_scale=1.0).images[0]
```

![](./image.png)

### Image-to-Image

Works as well! TODO docs

### Inpainting

Works as well! TODO docs

### ControlNet

Works as well! TODO docs

### T2I Adapter

Works as well! TODO docs

## Speed Benchmark

TODO

## Training

TODO
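Until the sections above are documented, here is a hedged image-to-image sketch under the same assumptions as the text-to-image example; the input path, strength, and step values are illustrative, not official guidance:

```python
import torch
from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler
from diffusers.utils import load_image

# Reuse the LCM-distilled UNet inside an SDXL image-to-image pipeline
unet = UNet2DConditionModel.from_pretrained("latent-consistency/lcm-sdxl", torch_dtype=torch.float16, variant="fp16")
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16"
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

init_image = load_image("path/to/your/image.png")  # placeholder input image
image = pipe(
    "a watercolor painting of an old man standing in the rain",
    image=init_image,
    num_inference_steps=4,
    strength=0.5,        # with 4 steps and strength 0.5, roughly two denoising steps actually run
    guidance_scale=1.0,
).images[0]
```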
nicotaroni/sentiment_analysis_extended_v2
nicotaroni
2023-11-10T09:00:50Z
7
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-11-10T09:00:00Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # nicotaroni/sentiment_analysis_extended_v2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("nicotaroni/sentiment_analysis_extended_v2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
tommylam/PPO-snowballTarget
tommylam
2023-11-10T08:52:48Z
11
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-11-10T08:52:45Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tommylam/PPO-snowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
lichengqian/ppo-LunarLander-v2
lichengqian
2023-11-10T08:36:16Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-10T08:35:53Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 248.95 +/- 22.94
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual archive name):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(repo_id="lichengqian/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
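To check the reported score yourself, an evaluation sketch is shown below; it assumes a Gymnasium-based environment with the Box2D extra installed (`pip install "gymnasium[box2d]"`):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Evaluate the loaded policy over a few episodes
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```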