| Column | Type | Range / Values |
|:---|:---|:---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-25 18:28:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 495 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-25 18:28:16 |
| card | string | length 11 to 1.01M |
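The records below follow this schema. As a minimal sketch of how such a dump can be queried with 🤗 Datasets (the dataset id here is an assumption; substitute the repository this preview was taken from):

```python
from datasets import load_dataset

# Hypothetical dataset id; substitute the actual repository.
ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

# Example: surface the most-liked automatic-speech-recognition models.
asr = ds.filter(lambda row: row["pipeline_tag"] == "automatic-speech-recognition")
for row in sorted(asr, key=lambda r: r["likes"], reverse=True)[:5]:
    print(row["modelId"], row["likes"], row["downloads"])
```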
kotoba-tech/kotoba-whisper-v1.1
kotoba-tech
2024-10-23T00:48:06Z
1,012
32
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "ja", "dataset:japanese-asr/whisper_transcriptions.reazonspeech.large", "dataset:japanese-asr/whisper_transcriptions.reazonspeech.large.wer_10.0", "dataset:japanese-asr/whisper_transcriptions.reazonspeech.large.wer_10.0.vectorized", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-04-29T14:53:45Z
---
language: ja
library_name: transformers
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: CommonVoice 8.0 (Test Split)
  src: https://huggingface.co/datasets/japanese-asr/ja_asr.common_voice_8_0/resolve/main/sample.flac
- example_title: JSUT Basic 5000
  src: https://huggingface.co/datasets/japanese-asr/ja_asr.jsut_basic5000/resolve/main/sample.flac
- example_title: ReazonSpeech (Test Split)
  src: https://huggingface.co/datasets/japanese-asr/ja_asr.reazonspeech_test/resolve/main/sample.flac
pipeline_tag: automatic-speech-recognition
datasets:
- japanese-asr/whisper_transcriptions.reazonspeech.large
- japanese-asr/whisper_transcriptions.reazonspeech.large.wer_10.0
- japanese-asr/whisper_transcriptions.reazonspeech.large.wer_10.0.vectorized
---

# Kotoba-Whisper-v1.1

_Kotoba-Whisper-v1.1_ is a Japanese ASR model based on [kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0), with an additional postprocessing stack integrated as a [`pipeline`](https://huggingface.co/docs/transformers/en/main_classes/pipelines). The new features include punctuation insertion with [punctuators](https://github.com/1-800-BAD-CODE/punctuators/tree/main) and improved timestamps with [stable-ts](https://github.com/jianfch/stable-ts) (see the Latency section below). These libraries are merged into Kotoba-Whisper-v1.1 via the pipeline and are applied seamlessly to the transcription predicted by [kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0). The pipeline was developed through a collaboration between [Asahi Ushio](https://asahiushio.com) and [Kotoba Technologies](https://twitter.com/kotoba_tech).

The following table presents the raw CER, i.e. computed without removing punctuation first, unlike the usual CER (see the evaluation script [here](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.1/blob/main/run_short_form_eval.py)).
| model | [CommonVoice 8 (Japanese test set)](https://huggingface.co/datasets/japanese-asr/ja_asr.common_voice_8_0) | [JSUT Basic 5000](https://huggingface.co/datasets/japanese-asr/ja_asr.jsut_basic5000) | [ReazonSpeech (held out test set)](https://huggingface.co/datasets/japanese-asr/ja_asr.reazonspeech_test) |
|:--------------------------------------------------------------------------------------------|------:|------:|------:|
| [kotoba-tech/kotoba-whisper-v2.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0) | 17.6 | 15.4 | 17.4 |
| [kotoba-tech/kotoba-whisper-v2.1](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.1) | 17.7 | 15.4 | 17.0 |
| [kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0) | 17.8 | 15.2 | 17.8 |
| [kotoba-tech/kotoba-whisper-v1.1](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.1) | 17.9 | 15.0 | 17.8 |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 15.3 | 13.4 | 20.5 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 15.9 | 10.6 | 34.6 |
| [openai/whisper-large](https://huggingface.co/openai/whisper-large) | 16.6 | 11.3 | 40.7 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) | 17.9 | 13.1 | 39.3 |
| [openai/whisper-base](https://huggingface.co/openai/whisper-base) | 34.5 | 26.4 | 76.0 |
| [openai/whisper-small](https://huggingface.co/openai/whisper-small) | 21.5 | 18.9 | 48.1 |
| [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 58.8 | 38.3 | 153.3 |

In terms of the normalized CER, kotoba-tech/kotoba-whisper-v1.1 marks the same values as [kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0), since the additions introduced in v1.1 are stripped out by the normalization.

### Latency

Kotoba-whisper-v1.1 improves the punctuation and the timestamps of the output of Kotoba-whisper-v1.0. However, since the punctuator and stable-ts are applied to each chunk, the timestamps have to be computed, which increases latency relative to the original kotoba-whisper-v1.0. The following table compares the inference speed on transcribing **50 min** of Japanese speech audio, where we report the average over five independent runs.

| model | return_timestamps | time (mean) |
|:---------------------------------------------------------|:--------------------|--------------:|
| kotoba-tech/kotoba-whisper-v1.0 | False | 10.8 |
| kotoba-tech/kotoba-whisper-v1.0 | True | 15.7 |
| kotoba-tech/kotoba-whisper-v1.1 (punctuator + stable-ts) | True | 17.9 |
| kotoba-tech/kotoba-whisper-v1.1 (punctuator) | True | 17.7 |
| kotoba-tech/kotoba-whisper-v1.1 (stable-ts) | True | 16.1 |
| openai/whisper-large-v3 | False | 29.1 |
| openai/whisper-large-v3 | True | 37.9 |

See the full table [here](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.1/raw/main/latency.csv).

## Transformers Usage

Kotoba-Whisper-v1.1 is supported in the Hugging Face 🤗 Transformers library from version 4.39 onwards. To run the model, first install the latest version of Transformers.
```bash
pip install --upgrade pip
pip install --upgrade transformers accelerate torchaudio
pip install stable-ts==2.16.0
pip install punctuators==0.0.5
```

### Transcription

The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class to transcribe audio files as follows:

```python
import torch
from transformers import pipeline
from datasets import load_dataset

# config
model_id = "kotoba-tech/kotoba-whisper-v1.1"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "ja", "task": "transcribe"}

# load model
pipe = pipeline(
    model=model_id,
    torch_dtype=torch_dtype,
    device=device,
    model_kwargs=model_kwargs,
    batch_size=16,
    trust_remote_code=True,
    punctuator=True
)

# load sample audio
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = dataset[0]["audio"]

# run inference
result = pipe(sample, chunk_length_s=15, return_timestamps=True, generate_kwargs=generate_kwargs)
print(result)
```

- To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:

```diff
- result = pipe(sample, return_timestamps=True, generate_kwargs=generate_kwargs)
+ result = pipe("audio.mp3", return_timestamps=True, generate_kwargs=generate_kwargs)
```

- To deactivate the punctuator:

```diff
- punctuator=True,
+ punctuator=False,
```

### Transcription with Prompt

Kotoba-whisper can generate transcriptions with prompting as below:

```python
import re
import torch
from transformers import pipeline
from datasets import load_dataset

# config
model_id = "kotoba-tech/kotoba-whisper-v1.1"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}

# load model
pipe = pipeline(
    model=model_id,
    torch_dtype=torch_dtype,
    device=device,
    model_kwargs=model_kwargs,
    batch_size=16,
    trust_remote_code=True
)

# load sample audio
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")

# --- Without prompt ---
text = pipe(dataset[10]["audio"], chunk_length_s=15, generate_kwargs=generate_kwargs)['text']
print(text)
# 81歳、力強い走りに変わってきます。

# --- With prompt ---: Let's change `81` to `91`.
prompt = "91歳"
generate_kwargs['prompt_ids'] = pipe.tokenizer.get_prompt_ids(prompt, return_tensors="pt").to(device)
text = pipe(dataset[10]["audio"], generate_kwargs=generate_kwargs)['text']
# currently the pipeline for ASR appends the prompt at the beginning of the transcription, so remove it
text = re.sub(rf"\A\s*{prompt}\s*", "", text)
print(text)
# あっぶったでもスルガさん、91歳、力強い走りに変わってきます。
```

### Flash Attention 2

We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2) if your GPU allows for it.
To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention):

```
pip install flash-attn --no-build-isolation
```

Then pass `attn_implementation="flash_attention_2"` to `from_pretrained`:

```diff
- model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
+ model_kwargs = {"attn_implementation": "flash_attention_2"} if torch.cuda.is_available() else {}
```

## Acknowledgements

* [OpenAI](https://openai.com/) for the Whisper [model](https://huggingface.co/openai/whisper-large-v3).
* Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration.
* Hugging Face 🤗 for the [Distil-Whisper codebase](https://github.com/huggingface/distil-whisper).
* [Reazon Human Interaction Lab](https://research.reazon.jp/) for the [ReazonSpeech dataset](https://huggingface.co/datasets/reazon-research/reazonspeech).
RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf
RichardErkhov
2024-10-23T00:15:09Z
6
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T23:38:25Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request)

finbot-transofrmer-based-phi3.5 - GGUF
- Model creator: https://huggingface.co/sujithatz/
- Original model: https://huggingface.co/sujithatz/finbot-transofrmer-based-phi3.5/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [finbot-transofrmer-based-phi3.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q2_K.gguf) | Q2_K | 1.32GB |
| [finbot-transofrmer-based-phi3.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.IQ3_XS.gguf) | IQ3_XS | 1.51GB |
| [finbot-transofrmer-based-phi3.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [finbot-transofrmer-based-phi3.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [finbot-transofrmer-based-phi3.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.IQ3_M.gguf) | IQ3_M | 1.73GB |
| [finbot-transofrmer-based-phi3.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q3_K.gguf) | Q3_K | 1.82GB |
| [finbot-transofrmer-based-phi3.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [finbot-transofrmer-based-phi3.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [finbot-transofrmer-based-phi3.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [finbot-transofrmer-based-phi3.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q4_0.gguf) | Q4_0 | 2.03GB |
| [finbot-transofrmer-based-phi3.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [finbot-transofrmer-based-phi3.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [finbot-transofrmer-based-phi3.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q4_K.gguf) | Q4_K | 2.23GB |
| [finbot-transofrmer-based-phi3.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [finbot-transofrmer-based-phi3.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q4_1.gguf) | Q4_1 | 2.24GB |
| [finbot-transofrmer-based-phi3.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q5_0.gguf) | Q5_0 | 2.46GB |
| [finbot-transofrmer-based-phi3.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [finbot-transofrmer-based-phi3.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q5_K.gguf) | Q5_K | 2.62GB |
| [finbot-transofrmer-based-phi3.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [finbot-transofrmer-based-phi3.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q5_1.gguf) | Q5_1 | 2.68GB |
| [finbot-transofrmer-based-phi3.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q6_K.gguf) | Q6_K | 2.92GB |
| [finbot-transofrmer-based-phi3.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/sujithatz_-_finbot-transofrmer-based-phi3.5-gguf/blob/main/finbot-transofrmer-based-phi3.5.Q8_0.gguf) | Q8_0 | 3.78GB |

Original model description:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model.
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
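A minimal sketch of running one of the quantized files from the table above, assuming [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) as the runtime (any GGUF-compatible runtime works; the local path and prompt are illustrative):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Any file from the quant table above works; Q4_K_M is a common
# size/quality tradeoff. The local path is an assumption.
llm = Llama(model_path="finbot-transofrmer-based-phi3.5.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize my spending categories for last month."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```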
NanQiangHF/llama3_8b_instruct_BWRM
NanQiangHF
2024-10-23T00:13:14Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "generated_from_trainer", "trl", "reward-trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-09-24T01:53:52Z
---
library_name: transformers
tags:
- generated_from_trainer
- trl
- reward-trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_name: llama3_8b_instruct_BWRM
licence: license
---

# Model Card for llama3_8b_instruct_BWRM

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="NanQiangHF/llama3_8b_instruct_BWRM", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with the TRL [Reward Trainer](https://huggingface.co/docs/trl/reward_trainer).

### Framework versions

- TRL: 0.12.0.dev0
- Transformers: 4.46.0.dev0
- Pytorch: 2.3.0
- Datasets: 3.0.0
- Tokenizers: 0.20.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
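Although the quick start above uses a `text-generation` pipeline, the repository is tagged `text-classification` with `reward-trainer`, so the checkpoint can presumably also score responses directly. A minimal sketch, assuming a single-logit reward head (the RewardTrainer convention) and the tokenizer's built-in chat template:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "NanQiangHF/llama3_8b_instruct_BWRM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# Score a (prompt, response) pair; a higher logit means more preferred.
chat = [
    {"role": "user", "content": "Explain gradient descent in one paragraph."},
    {"role": "assistant", "content": "Gradient descent iteratively updates parameters against the loss gradient."},
]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")
with torch.no_grad():
    reward = model(input_ids).logits[0, 0]
print(reward.item())
```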
meandyou200175/bge-m3-v2-rerank-finetune
meandyou200175
2024-10-23T00:00:15Z
9
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-22T14:53:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
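Given the `xlm-roberta`/`text-classification` tags and the model name, a hedged reranking sketch, assuming a BGE-style cross-encoder that emits one relevance logit per (query, passage) pair (the assumption and example data are not from the card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "meandyou200175/bge-m3-v2-rerank-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

query = "What are the side effects of paracetamol?"
passages = [
    "Common side effects of paracetamol include nausea and, rarely, liver damage at high doses.",
    "The Eiffel Tower was completed in 1889.",
]

# Cross-encoder scoring: each (query, passage) pair is encoded jointly.
inputs = tokenizer([query] * len(passages), passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {passage}")
```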
mradermacher/Gemmasutra-9B-v1-GGUF
mradermacher
2024-10-22T23:52:20Z
74
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:TheDrummer/Gemmasutra-9B-v1", "base_model:quantized:TheDrummer/Gemmasutra-9B-v1", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T23:28:47Z
---
base_model: TheDrummer/Gemmasutra-9B-v1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/TheDrummer/Gemmasutra-9B-v1

<!-- provided-files -->

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemmasutra-9B-v1-GGUF/resolve/main/Gemmasutra-9B-v1.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
Izdibay/llama3-8b-quantized-f16
Izdibay
2024-10-22T23:50:53Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T23:46:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
city96/stable-diffusion-3-medium-gguf
city96
2024-10-22T23:43:49Z
676
5
gguf
[ "gguf", "text-to-image", "image-generation", "stable-diffusion", "en", "base_model:stabilityai/stable-diffusion-3-medium", "base_model:quantized:stabilityai/stable-diffusion-3-medium", "license:other", "region:us" ]
text-to-image
2024-10-22T22:53:51Z
---
base_model: stabilityai/stable-diffusion-3-medium
library_name: gguf
quantized_by: city96
tags:
- text-to-image
- image-generation
- stable-diffusion
language:
- en
license: other
license_name: stabilityai-ai-community
license_link: LICENSE.md
---

This is a direct GGUF conversion of [stabilityai/stable-diffusion-3-medium](https://huggingface.co/stabilityai/stable-diffusion-3-medium).

As this is a quantized model, not a finetune, all the same restrictions and original license terms still apply.

The model files can be used with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions.

Please refer to [this chart](https://github.com/ggerganov/llama.cpp/blob/master/examples/perplexity/README.md#llama-3-8b-scoreboard) for a basic overview of quantization types.
mradermacher/Buddy-2B-v1-GGUF
mradermacher
2024-10-22T23:32:10Z
12
0
transformers
[ "transformers", "gguf", "en", "base_model:TheDrummer/Buddy-2B-v1", "base_model:quantized:TheDrummer/Buddy-2B-v1", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T23:26:26Z
---
base_model: TheDrummer/Buddy-2B-v1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/TheDrummer/Buddy-2B-v1

<!-- provided-files -->

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.Q3_K_S.gguf) | Q3_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.IQ4_XS.gguf) | IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.Q5_K_M.gguf) | Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.Q6_K.gguf) | Q6_K | 2.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.Q8_0.gguf) | Q8_0 | 2.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Buddy-2B-v1-GGUF/resolve/main/Buddy-2B-v1.f16.gguf) | f16 | 5.3 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
rohit0619/speecht5_finetuned_rohit_hindi
rohit0619
2024-10-22T22:52:36Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-10-22T22:12:41Z
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_rohit_hindi
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# speecht5_finetuned_rohit_hindi

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5786

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 800
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8562 | 0.3366 | 100 | 0.7391 |
| 0.7663 | 0.6731 | 200 | 0.6572 |
| 0.7168 | 1.0097 | 300 | 0.6354 |
| 0.6915 | 1.3462 | 400 | 0.6043 |
| 0.6651 | 1.6828 | 500 | 0.6000 |
| 0.6573 | 2.0194 | 600 | 0.5836 |
| 0.6347 | 2.3559 | 700 | 0.5818 |
| 0.6554 | 2.6925 | 800 | 0.5786 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
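The card documents training only; inference should follow the standard SpeechT5 text-to-speech recipe. A minimal sketch, assuming the usual processor/vocoder pairing and a 512-dim speaker x-vector (the Hindi sample text and the zero-vector placeholder are illustrative, not from the card):

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "rohit0619/speecht5_finetuned_rohit_hindi"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="नमस्ते, आप कैसे हैं?", return_tensors="pt")

# SpeechT5 conditions on a 512-dim speaker x-vector; the zero vector below is a
# placeholder, substitute an embedding extracted from the target speaker.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```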
mradermacher/Atlas-Chat-27B-GGUF
mradermacher
2024-10-22T22:37:08Z
27
0
transformers
[ "transformers", "gguf", "conversational", "ar", "ary", "dataset:MBZUAI-Paris/Darija-SFT-Mixture", "base_model:MBZUAI-Paris/Atlas-Chat-27B", "base_model:quantized:MBZUAI-Paris/Atlas-Chat-27B", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-10-22T10:56:21Z
---
base_model: MBZUAI-Paris/Atlas-Chat-27B
datasets:
- MBZUAI-Paris/Darija-SFT-Mixture
extra_gated_button_content: Acknowledge license
language:
- ar
- ary
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- conversational
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/MBZUAI-Paris/Atlas-Chat-27B

<!-- provided-files -->

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Atlas-Chat-27B-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.Q2_K.gguf) | Q2_K | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.Q3_K_S.gguf) | Q3_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.Q3_K_L.gguf) | Q3_K_L | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.IQ4_XS.gguf) | IQ4_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.Q4_K_M.gguf) | Q4_K_M | 16.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.Q5_K_S.gguf) | Q5_K_S | 19.0 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.Q5_K_M.gguf) | Q5_K_M | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.Q6_K.gguf) | Q6_K | 22.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Chat-27B-GGUF/resolve/main/Atlas-Chat-27B.Q8_0.gguf) | Q8_0 | 29.0 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
gianrp6/Bodybuidng2
gianrp6
2024-10-22T22:31:28Z
7
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-22T22:31:05Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: His chest is prominent, showcasing well-defined pectoral muscles.
  parameters:
    negative_prompt: bad photo
  output:
    url: images/photo-of-profile-of-software-engineer-for-linkedin-WqCxEu7cSOWKOdzsxdcY5Q-Se-HLGyBTGmwktiEdYzfCA.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Muscle, hunk, sexy men, men, male, 1male, pectoral, bodybuilding
license: other
license_name: license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# Bodybuidng2

<Gallery />

## Trigger words

You should use any of `Muscle`, `hunk`, `sexy men`, `men`, `male`, `1male`, `pectoral`, or `bodybuilding` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/gianrp6/Bodybuidng2/tree/main) them in the Files & versions tab.
neopolita/granite-3.0-8b-base-gguf
neopolita
2024-10-22T22:23:30Z
9
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-10-22T21:25:29Z
---
{}
---

# GGUF quants for [**ibm-granite/granite-3.0-8b-base**](https://huggingface.co/ibm-granite/granite-3.0-8b-base) using [llama.cpp](https://github.com/ggerganov/llama.cpp)

**Terms of Use**: Please check the [**original model**](https://huggingface.co/ibm-granite/granite-3.0-8b-base)

<picture>
  <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>

## Quants

* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors.
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K.
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K.
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models.
* `q4_k_s`: Uses Q4_K for all tensors.
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_s`: Uses Q5_K for all tensors.
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
* `q6_k`: Uses Q8_K for all tensors.
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
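A hedged sketch of how quants like those above are typically produced with llama.cpp (the script and binary names assume a recent llama.cpp checkout; paths are illustrative, and the base model must be downloaded locally first):

```python
import subprocess

# Both tools ship with a llama.cpp checkout; the local directory below is an
# assumption and must contain the downloaded HF weights.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", "./granite-3.0-8b-base",
     "--outfile", "granite-3.0-8b-base.f16.gguf", "--outtype", "f16"],
    check=True,
)

# Requantize the f16 file to one of the schemes listed above, e.g. q4_k_m.
subprocess.run(
    ["./llama-quantize", "granite-3.0-8b-base.f16.gguf",
     "granite-3.0-8b-base.q4_k_m.gguf", "Q4_K_M"],
    check=True,
)
```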
gianrp6/lorakitconnorv2
gianrp6
2024-10-22T22:20:49Z
7
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-22T22:18:16Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: kitconnor dressed as captain america
  parameters:
    negative_prompt: bad photo
  output:
    url: images/flux (12).png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: kitconnor
license: other
license_name: license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# lorakitconnor

<Gallery />

## Model description

no copy

## Trigger words

You should use `kitconnor` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/gianrp6/lorakitconnorv2/tree/main) them in the Files & versions tab.
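A minimal diffusers sketch for using the LoRA with its trigger word (generation parameters are assumed defaults, not from the card):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and apply the LoRA on top of it.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("gianrp6/lorakitconnorv2")
pipe.to("cuda")

# The trigger word `kitconnor` has to appear in the prompt for the LoRA to fire.
image = pipe(
    "kitconnor dressed as captain america",
    num_inference_steps=28,  # assumed settings; tune to taste
    guidance_scale=3.5,
).images[0]
image.save("kitconnor.png")
```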
nbroad/bert-base-uncased
nbroad
2024-10-22T22:14:15Z
7
0
null
[ "pytorch", "tf", "jax", "rust", "coreml", "onnx", "safetensors", "bert", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "region:us" ]
null
2024-10-22T22:14:15Z
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# BERT base model (uncased)

Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the BERT model as inputs.

## Model variations

BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers. Chinese and multilingual uncased and cased versions followed shortly after. Modified preprocessing with whole word masking replaced subpiece masking in a following work, with the release of two models. Twenty-four smaller models were released afterward. The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on GitHub.
| Model | #params | Language | |------------------------|--------------------------------|-------| | [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English | | [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English | sub | [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English | | [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English | | [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese | | [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple | | [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English | | [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English | ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions of a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'}, {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'}, {'sequence': "[CLS] hello i'm a fine model. [SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = TFBertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.09747550636529922, 'token': 10533, 'token_str': 'carpenter'}, {'sequence': '[CLS] the man worked as a waiter. 
[SEP]', 'score': 0.0523831807076931, 'token': 15610, 'token_str': 'waiter'}, {'sequence': '[CLS] the man worked as a barber. [SEP]', 'score': 0.04962705448269844, 'token': 13362, 'token_str': 'barber'}, {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.03788609802722931, 'token': 15893, 'token_str': 'mechanic'}, {'sequence': '[CLS] the man worked as a salesman. [SEP]', 'score': 0.037680890411138535, 'token': 18968, 'token_str': 'salesman'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.21981462836265564, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.1597415804862976, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.1154729500412941, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the woman worked as a prostitute. [SEP]', 'score': 0.037968918681144714, 'token': 19215, 'token_str': 'prostitute'}, {'sequence': '[CLS] the woman worked as a cook. [SEP]', 'score': 0.03042375110089779, 'token': 5660, 'token_str': 'cook'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). - In the remaining 10% of cases, the masked tokens are left as is. (A toy sketch of this masking rule is shown below.) ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
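The 80/10/10 masking rule above is easy to misread, so here is a minimal, self-contained sketch of it in plain Python. This is illustrative only: the names and the toy vocabulary are not from the original BERT code, which operates on WordPiece token ids rather than words.

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Toy sketch of BERT's masking procedure; not the original implementation."""
    masked, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:        # 15% of tokens are selected
            labels.append(token)               # the model must predict the original
            r = random.random()
            if r < 0.8:                        # 80%: replace with [MASK]
                masked.append("[MASK]")
            elif r < 0.9:                      # 10%: replace with a random token
                masked.append(random.choice(vocab))
            else:                              # 10%: keep the token unchanged
                masked.append(token)
        else:
            labels.append(None)                # unselected tokens have no target
            masked.append(token)
    return masked, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
print(mask_tokens(tokens, vocab=tokens))
```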
## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: GLUE test results: | Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average | |:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:| | | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=bert-base-uncased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
MakAIHealthLab/vit-base-patch16-224-in21k-finetuned-papsmear
MakAIHealthLab
2024-10-22T22:12:47Z
95
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-07T10:16:15Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-in21k-finetuned-papsmear results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9338235294117647 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-papsmear This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2825 - Accuracy: 0.9338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9231 | 9 | 1.7346 | 0.2647 | | 1.7645 | 1.9487 | 19 | 1.6152 | 0.3088 | | 1.661 | 2.9744 | 29 | 1.4663 | 0.4118 | | 1.496 | 4.0 | 39 | 1.2989 | 0.4853 | | 1.3097 | 4.9231 | 48 | 1.1491 | 0.5588 | | 1.091 | 5.9487 | 58 | 0.9933 | 0.7206 | | 0.9088 | 6.9744 | 68 | 0.9171 | 0.6985 | | 0.7858 | 8.0 | 78 | 0.8301 | 0.7721 | | 0.7016 | 8.9231 | 87 | 0.7925 | 0.7353 | | 0.6136 | 9.9487 | 97 | 0.6992 | 0.7647 | | 0.532 | 10.9744 | 107 | 0.6401 | 0.8309 | | 0.5018 | 12.0 | 117 | 0.5787 | 0.8382 | | 0.4279 | 12.9231 | 126 | 0.6130 | 0.8088 | | 0.4116 | 13.9487 | 136 | 0.5090 | 0.8382 | | 0.3848 | 14.9744 | 146 | 0.5165 | 0.8676 | | 0.3449 | 16.0 | 156 | 0.4843 | 0.8382 | | 0.3008 | 16.9231 | 165 | 0.5460 | 0.8456 | | 0.2797 | 17.9487 | 175 | 0.4985 | 0.8309 | | 0.2696 | 18.9744 | 185 | 0.5586 | 0.8456 | | 0.2633 | 20.0 | 195 | 0.4349 | 0.9044 | | 0.2569 | 20.9231 | 204 | 0.4017 | 0.8897 | | 0.27 | 21.9487 | 214 | 0.4758 | 0.8603 | | 0.2706 | 22.9744 | 224 | 0.4133 | 0.8897 | | 0.2211 | 24.0 | 234 | 0.3844 | 0.9118 | | 0.1977 | 24.9231 | 243 | 0.3497 | 0.9265 | | 0.1969 | 25.9487 | 253 | 0.3736 | 0.9044 | | 0.1776 | 26.9744 | 263 | 0.3797 | 0.9044 | | 0.1787 | 28.0 | 273 | 0.3949 | 0.8897 | | 0.18 | 28.9231 | 282 | 0.3278 | 0.9265 | | 0.1797 | 29.9487 | 292 | 0.3615 | 0.9044 | | 0.1665 | 30.9744 | 302 | 0.4174 | 0.8603 | | 0.163 | 32.0 | 312 | 0.3574 | 0.8971 | | 0.1498 | 32.9231 | 321 | 0.3591 | 0.9044 | | 0.1405 | 33.9487 | 331 | 0.3017 | 0.9191 | | 0.155 | 34.9744 | 341 | 0.3303 | 0.9265 | | 0.1519 | 36.0 | 351 | 0.3559 | 0.8971 | | 0.1415 | 36.9231 | 360 | 0.2890 | 0.9191 | | 0.1256 | 37.9487 | 370 | 0.3445 | 0.8897 | | 0.1217 | 38.9744 | 380 | 0.3435 | 0.9118 | | 0.1285 | 40.0 | 390 | 0.3025 | 0.9191 | | 0.1285 | 40.9231 | 399 | 0.3602 | 0.8824 | | 0.1301 | 41.9487 | 409 | 0.3336 | 0.8897 | | 0.1243 | 42.9744 | 419 | 0.2825 | 0.9338 | | 0.1191 | 44.0 | 429 | 0.2835 | 
0.9265 | | 0.1221 | 44.9231 | 438 | 0.2724 | 0.9191 | | 0.1151 | 45.9487 | 448 | 0.2708 | 0.9191 | | 0.1195 | 46.1538 | 450 | 0.2707 | 0.9191 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.2 - Tokenizers 0.20.1
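The card above does not include an inference example; here is a minimal sketch using the 🤗 `pipeline` API. The image path is a placeholder, and the label set depends on the imagefolder classes used for fine-tuning.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the image processor is resolved from the repo.
classifier = pipeline(
    "image-classification",
    model="MakAIHealthLab/vit-base-patch16-224-in21k-finetuned-papsmear",
)

# "pap_smear_sample.png" is a placeholder path for a local image.
print(classifier("pap_smear_sample.png"))
```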
anmittal1/fdk-sd3-lora-2
anmittal1
2024-10-22T22:09:21Z
5
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "sd3", "sd3-diffusers", "template:sd-lora", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers", "license:openrail++", "region:us" ]
text-to-image
2024-10-21T15:21:36Z
--- base_model: stabilityai/stable-diffusion-3-medium-diffusers library_name: diffusers license: openrail++ tags: - text-to-image - diffusers-training - diffusers - lora - sd3 - sd3-diffusers - template:sd-lora instance_prompt: a photo of [V] widget: - text: A photo of [V] output: url: image_0.png - text: A photo of [V] output: url: image_1.png - text: A photo of [V] output: url: image_2.png - text: A photo of [V] output: url: image_3.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3 DreamBooth LoRA - anmittal1/fdk-sd3-lora-2 <Gallery /> ## Model description These are anmittal1/fdk-sd3-lora-2 DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md). Was LoRA for the text encoder enabled? False. ## Trigger words You should use `a photo of [V]` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](anmittal1/fdk-sd3-lora-2/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('anmittal1/fdk-sd3-lora-2', weight_name='pytorch_lora_weights.safetensors') image = pipeline('A photo of [V]').images[0] ``` ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/anmittal1/fdk-sd3-lora-2/blob/main/diffusers_lora_weights.safetensors)**. - Rename it and place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
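For the weighting mentioned above, one common pattern is to fuse the LoRA into the base weights at a chosen scale. A minimal sketch using the diffusers LoRA API; the `0.8` scale is an example value, not a recommendation from the authors.

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "anmittal1/fdk-sd3-lora-2", weight_name="pytorch_lora_weights.safetensors"
)
# Bake the LoRA into the base weights at a reduced strength (example value).
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline("A photo of [V]").images[0]
```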
djuna/MN-Chinofun
djuna
2024-10-22T22:05:48Z
9
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1", "base_model:merge:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1", "base_model:GalrionSoftworks/Canidori-12B-v1", "base_model:merge:GalrionSoftworks/Canidori-12B-v1", "base_model:Nohobby/MN-12B-Siskin-v0.2", "base_model:merge:Nohobby/MN-12B-Siskin-v0.2", "base_model:RozGrov/NemoDori-v0.2.2-12B-MN-ties", "base_model:merge:RozGrov/NemoDori-v0.2.2-12B-MN-ties", "base_model:spow12/ChatWaifu_v1.4", "base_model:merge:spow12/ChatWaifu_v1.4", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-16T13:24:12Z
--- library_name: transformers tags: - mergekit - merge base_model: - RozGrov/NemoDori-v0.2.2-12B-MN-ties - spow12/ChatWaifu_v1.4 - Nohobby/MN-12B-Siskin-v0.2 - GalrionSoftworks/Canidori-12B-v1 - ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1 model-index: - name: MN-Chinofun results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 61.1 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/MN-Chinofun name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 28.48 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/MN-Chinofun name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 10.5 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/MN-Chinofun name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 6.15 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/MN-Chinofun name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 10.38 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/MN-Chinofun name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 28.92 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/MN-Chinofun name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1) as a base. 
### Models Merged The following models were included in the merge: * [RozGrov/NemoDori-v0.2.2-12B-MN-ties](https://huggingface.co/RozGrov/NemoDori-v0.2.2-12B-MN-ties) * [spow12/ChatWaifu_v1.4](https://huggingface.co/spow12/ChatWaifu_v1.4) * [Nohobby/MN-12B-Siskin-v0.2](https://huggingface.co/Nohobby/MN-12B-Siskin-v0.2) * [GalrionSoftworks/Canidori-12B-v1](https://huggingface.co/GalrionSoftworks/Canidori-12B-v1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Nohobby/MN-12B-Siskin-v0.2 - model: spow12/ChatWaifu_v1.4 - model: RozGrov/NemoDori-v0.2.2-12B-MN-ties - model: GalrionSoftworks/Canidori-12B-v1 merge_method: model_stock base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1 dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_djuna__MN-Chinofun) | Metric |Value| |-------------------|----:| |Avg. |24.26| |IFEval (0-Shot) |61.10| |BBH (3-Shot) |28.48| |MATH Lvl 5 (4-Shot)|10.50| |GPQA (0-shot) | 6.15| |MuSR (0-shot) |10.38| |MMLU-PRO (5-shot) |28.92|
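The card stops at the leaderboard numbers; below is a minimal, hedged sketch of loading the merged checkpoint for generation. This is standard 🤗 Transformers usage, not an author-provided snippet, and the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("djuna/MN-Chinofun")
model = AutoModelForCausalLM.from_pretrained(
    "djuna/MN-Chinofun", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Write a two-sentence story about a fox.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```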
lizchu414/training-flan-t5-small-finetuned-squad
lizchu414
2024-10-22T21:51:20Z
5
0
transformers
[ "transformers", "safetensors", "t5", "question-answering", "generated_from_trainer", "text-generation-inference", "endpoints_compatible", "region:us" ]
question-answering
2024-10-22T21:51:12Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: training-flan-t5-small-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # training-flan-t5-small-finetuned-squad This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7368 | 1.0 | 2797 | 1.3536 | | 1.4998 | 2.0 | 5594 | 1.2519 | | 1.3924 | 3.0 | 8391 | 1.2266 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.2.2+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
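Since the card does not document usage, here is a hedged inference sketch. The SQuAD-style `question: ... context: ...` prompt format is an assumption (the preprocessing is not described in the card), so verify it against the training code.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("lizchu414/training-flan-t5-small-finetuned-squad")
model = AutoModelForSeq2SeqLM.from_pretrained("lizchu414/training-flan-t5-small-finetuned-squad")

# Assumed SQuAD-style prompt format -- the card does not document the preprocessing.
prompt = "question: Where is the Eiffel Tower? context: The Eiffel Tower is in Paris."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```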
itoo944/bert-wikipedia-completion
itoo944
2024-10-22T21:42:18Z
5
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-10-22T21:42:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mateuo/visado
mateuo
2024-10-22T21:22:51Z
5
0
null
[ "safetensors", "phi3", "nlp", "text-generation", "conversational", "custom_code", "multilingual", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:finetune:microsoft/Phi-3.5-mini-instruct", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2024-10-20T12:03:34Z
--- license: mit language: - multilingual tags: - nlp base_model: microsoft/Phi-3.5-mini-instruct pipeline_tag: text-generation --- # NuExtract-v1.5 by NuMind 🔥 NuExtract-v1.5 is a fine-tuning of [Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct), trained on a private high-quality dataset for structured information extraction. It supports long documents and several languages (English, French, Spanish, German, Portuguese, and Italian). To use the model, provide an input text and a JSON template describing the information you need to extract. Note: This model is trained to prioritize pure extraction, so in most cases all text generated by the model is present as is in the original text. Check out the [blog post](https://numind.ai/blog/nuextract-1-5---multilingual-infinite-context-still-small-and-better-than-gpt-4o). Try it here: [Playground](https://huggingface.co/spaces/numind/NuExtract-v1.5) We also provide a tiny (0.5B) version which is based on Qwen2.5-0.5B: [NuExtract-tiny-v1.5](https://huggingface.co/numind/NuExtract-tiny-v1.5) ## Benchmark Zero-shot performance (English): <p align="left"> <img src="english_bench.png" style="height: auto;"> </p> Zero-shot performance (Multilingual): <p align="left"> <img src="multilingual_bench.png" style="height: auto;"> </p> Long documents (8-10k tokens): <p align="left"> <img src="8-10_long_context.png" style="height: auto;"> </p> Very long documents (10-20k tokens): <p align="left"> <img src="10-20_long_context.png" style="height: auto;"> </p> Few-shot fine-tuning: <p align="left"> <img src="fewshot_bench.png" style="height: auto;"> </p> ## Usage To use the model: ```python import json import torch from transformers import AutoModelForCausalLM, AutoTokenizer def predict_NuExtract(model, tokenizer, texts, template, batch_size=1, max_length=10_000, max_new_tokens=4_000): template = json.dumps(json.loads(template), indent=4) prompts = [f"""<|input|>\n### Template:\n{template}\n### Text:\n{text}\n\n<|output|>""" for text in texts] outputs = [] with torch.no_grad(): for i in range(0, len(prompts), batch_size): batch_prompts = prompts[i:i+batch_size] batch_encodings = tokenizer(batch_prompts, return_tensors="pt", truncation=True, padding=True, max_length=max_length).to(model.device) pred_ids = model.generate(**batch_encodings, max_new_tokens=max_new_tokens) outputs += tokenizer.batch_decode(pred_ids, skip_special_tokens=True) return [output.split("<|output|>")[1] for output in outputs] model_name = "numind/NuExtract-v1.5" device = "cuda" model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, trust_remote_code=True).to(device).eval() tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) text = """We introduce Mistral 7B, a 7–billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms the best open 13B model (Llama 2) across all evaluated benchmarks, and the best released 34B model (Llama 1) in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B – Instruct, that surpasses Llama 2 13B – chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license. 
Code: <https://github.com/mistralai/mistral-src> Webpage: <https://mistral.ai/news/announcing-mistral-7b/>""" template = """{ "Model": { "Name": "", "Number of parameters": "", "Number of max token": "", "Architecture": [] }, "Usage": { "Use case": [], "Licence": "" } }""" prediction = predict_NuExtract(model, tokenizer, [text], template)[0] print(prediction) ``` Sliding window prompting: ```python import json MAX_INPUT_SIZE = 20_000 MAX_NEW_TOKENS = 6000 def clean_json_text(text): text = text.strip() text = text.replace("\#", "#").replace("\&", "&") return text def predict_chunk(text, template, current, model, tokenizer): current = clean_json_text(current) input_llm = f"<|input|>\n### Template:\n{template}\n### Current:\n{current}\n### Text:\n{text}\n\n<|output|>" + "{" input_ids = tokenizer(input_llm, return_tensors="pt", truncation=True, max_length=MAX_INPUT_SIZE).to("cuda") output = tokenizer.decode(model.generate(**input_ids, max_new_tokens=MAX_NEW_TOKENS)[0], skip_special_tokens=True) return clean_json_text(output.split("<|output|>")[1]) def split_document(document, window_size, overlap): tokens = tokenizer.tokenize(document) print(f"\tLength of document: {len(tokens)} tokens") chunks = [] if len(tokens) > window_size: for i in range(0, len(tokens), window_size-overlap): print(f"\t{i} to {i + len(tokens[i:i + window_size])}") chunk = tokenizer.convert_tokens_to_string(tokens[i:i + window_size]) chunks.append(chunk) if i + len(tokens[i:i + window_size]) >= len(tokens): break else: chunks.append(document) print(f"\tSplit into {len(chunks)} chunks") return chunks def handle_broken_output(pred, prev): try: if all([(v in ["", []]) for v in json.loads(pred).values()]): # if empty json, return previous pred = prev except: # if broken json, return previous pred = prev return pred def sliding_window_prediction(text, template, model, tokenizer, window_size=4000, overlap=128): # split text into chunks of n tokens tokens = tokenizer.tokenize(text) chunks = split_document(text, window_size, overlap) # iterate over text chunks prev = template for i, chunk in enumerate(chunks): print(f"Processing chunk {i}...") pred = predict_chunk(chunk, template, prev, model, tokenizer) # handle broken output pred = handle_broken_output(pred, prev) # iterate prev = pred return pred ```
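A short usage sketch for the sliding-window helper above. It reuses `model` and `tokenizer` from the first snippet; the document path and the template are illustrative.

```python
# Reuses `model` and `tokenizer` defined in the first snippet above.
long_text = open("long_document.txt").read()  # any document longer than one window

template = """{
    "Company": {
        "Name": "",
        "Products": []
    }
}"""

prediction = sliding_window_prediction(
    long_text, template, model, tokenizer, window_size=4000, overlap=128
)
print(prediction)
```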
RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf
RichardErkhov
2024-10-22T21:01:00Z
36
0
null
[ "gguf", "arxiv:2306.01708", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T19:36:54Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Phi-3.5-mini-TitanFusion-0.1 - GGUF - Model creator: https://huggingface.co/bunnycore/ - Original model: https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Phi-3.5-mini-TitanFusion-0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q2_K.gguf) | Q2_K | 1.32GB | | [Phi-3.5-mini-TitanFusion-0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.IQ3_XS.gguf) | IQ3_XS | 1.51GB | | [Phi-3.5-mini-TitanFusion-0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.IQ3_S.gguf) | IQ3_S | 1.57GB | | [Phi-3.5-mini-TitanFusion-0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q3_K_S.gguf) | Q3_K_S | 1.57GB | | [Phi-3.5-mini-TitanFusion-0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.IQ3_M.gguf) | IQ3_M | 1.73GB | | [Phi-3.5-mini-TitanFusion-0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q3_K.gguf) | Q3_K | 1.82GB | | [Phi-3.5-mini-TitanFusion-0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q3_K_M.gguf) | Q3_K_M | 1.82GB | | [Phi-3.5-mini-TitanFusion-0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q3_K_L.gguf) | Q3_K_L | 1.94GB | | [Phi-3.5-mini-TitanFusion-0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.IQ4_XS.gguf) | IQ4_XS | 1.93GB | | [Phi-3.5-mini-TitanFusion-0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q4_0.gguf) | Q4_0 | 2.03GB | | [Phi-3.5-mini-TitanFusion-0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.IQ4_NL.gguf) | IQ4_NL | 2.04GB | | [Phi-3.5-mini-TitanFusion-0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q4_K_S.gguf) | Q4_K_S | 2.04GB | | [Phi-3.5-mini-TitanFusion-0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q4_K.gguf) | Q4_K | 2.23GB | | [Phi-3.5-mini-TitanFusion-0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q4_K_M.gguf) | Q4_K_M | 2.23GB | | [Phi-3.5-mini-TitanFusion-0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q4_1.gguf) | Q4_1 | 2.24GB | | [Phi-3.5-mini-TitanFusion-0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q5_0.gguf) | 
Q5_0 | 2.46GB | | [Phi-3.5-mini-TitanFusion-0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q5_K_S.gguf) | Q5_K_S | 2.46GB | | [Phi-3.5-mini-TitanFusion-0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q5_K.gguf) | Q5_K | 2.62GB | | [Phi-3.5-mini-TitanFusion-0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q5_K_M.gguf) | Q5_K_M | 2.62GB | | [Phi-3.5-mini-TitanFusion-0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q5_1.gguf) | Q5_1 | 2.68GB | | [Phi-3.5-mini-TitanFusion-0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q6_K.gguf) | Q6_K | 2.92GB | | [Phi-3.5-mini-TitanFusion-0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf/blob/main/Phi-3.5-mini-TitanFusion-0.1.Q8_0.gguf) | Q8_0 | 3.78GB | Original model description: --- library_name: transformers tags: - mergekit - merge base_model: - nbeerbower/phi3.5-gutenberg-4B - ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1 - microsoft/Phi-3.5-mini-instruct model-index: - name: Phi-3.5-mini-TitanFusion-0.1 results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 52.28 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 35.45 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 6.19 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 10.85 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 15.8 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 31.18 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1 name: 
Open LLM Leaderboard --- This is a merged pre-trained language model created using the TIES merge method. It is based on the microsoft/Phi-3.5-mini-instruct model and incorporates the knowledge and capabilities of the nbeerbower/phi3.5-gutenberg-4B and ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1 models. ## Capabilities: - Roleplay: The model can engage in role-playing scenarios, taking on different personas and responding to prompts in a character-appropriate manner. - Creative Writing: It can assist in creative writing tasks, such as brainstorming ideas, generating plotlines, or developing characters. - Reasoning: The model can reason about information and draw conclusions based on the data it has been trained on. This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) as a base. ### Models Merged The following models were included in the merge: * [nbeerbower/phi3.5-gutenberg-4B](https://huggingface.co/nbeerbower/phi3.5-gutenberg-4B) * [ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1 parameters: weight: 1 - model: nbeerbower/phi3.5-gutenberg-4B parameters: weight: 1 merge_method: ties base_model: microsoft/Phi-3.5-mini-instruct parameters: density: 1 normalize: true int8_mask: true dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_bunnycore__Phi-3.5-mini-TitanFusion-0.1) | Metric |Value| |-------------------|----:| |Avg. |25.29| |IFEval (0-Shot) |52.28| |BBH (3-Shot) |35.45| |MATH Lvl 5 (4-Shot)| 6.19| |GPQA (0-shot) |10.85| |MuSR (0-shot) |15.80| |MMLU-PRO (5-shot) |31.18|
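The quant table above lists files but no local inference example; here is a hedged sketch using `llama-cpp-python` with one of the listed quants (Q4_K_M is a common size/quality trade-off, but any filename from the table works).

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quants listed in the table above.
path = hf_hub_download(
    repo_id="RichardErkhov/bunnycore_-_Phi-3.5-mini-TitanFusion-0.1-gguf",
    filename="Phi-3.5-mini-TitanFusion-0.1.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```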
atomrom/flan-t5-base-supportcase-classification
atomrom
2024-10-22T20:57:14Z
5
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-10-17T20:44:08Z
--- library_name: transformers license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: flan-t5-base-supportcase-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-supportcase-classification This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
mradermacher/RYS-XLarge-i1-GGUF
mradermacher
2024-10-22T20:26:08Z
32
1
transformers
[ "transformers", "gguf", "en", "base_model:dnhkng/RYS-XLarge", "base_model:quantized:dnhkng/RYS-XLarge", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-22T16:46:43Z
--- base_model: dnhkng/RYS-XLarge language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/dnhkng/RYS-XLarge <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/RYS-XLarge-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ1_S.gguf) | i1-IQ1_S | 24.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ1_M.gguf) | i1-IQ1_M | 25.5 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.4 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ2_XS.gguf) | i1-IQ2_XS | 29.1 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ2_S.gguf) | i1-IQ2_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ2_M.gguf) | i1-IQ2_M | 31.5 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q2_K.gguf) | i1-Q2_K | 31.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 34.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ3_XS.gguf) | i1-IQ3_XS | 35.2 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q3_K_S.gguf) | i1-Q3_K_S | 36.9 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ3_S.gguf) | i1-IQ3_S | 37.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ3_M.gguf) | i1-IQ3_M | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q3_K_M.gguf) | i1-Q3_K_M | 40.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q3_K_L.gguf) | i1-Q3_K_L | 42.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-IQ4_XS.gguf) | i1-IQ4_XS | 42.7 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q4_0.gguf) | i1-Q4_0 | 44.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q4_K_S.gguf) | i1-Q4_K_S | 47.0 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 50.8 | fast, recommended | | [PART 
1](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 55.2 | | | [PART 1](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 58.4 | | | [PART 1](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF/resolve/main/RYS-XLarge.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 69.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
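For the multi-part files above (e.g. the two-part Q4_K_M), the parts only need to be concatenated in order before use, as the linked READMEs describe. A minimal Python sketch, equivalent to `cat part1 part2 > out`; the filenames are taken from the table above.

```python
import shutil

parts = [
    "RYS-XLarge.i1-Q4_K_M.gguf.part1of2",
    "RYS-XLarge.i1-Q4_K_M.gguf.part2of2",
]
with open("RYS-XLarge.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```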
mradermacher/RYS-XLarge-GGUF
mradermacher
2024-10-22T20:26:07Z
16
0
transformers
[ "transformers", "gguf", "en", "base_model:dnhkng/RYS-XLarge", "base_model:quantized:dnhkng/RYS-XLarge", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T01:23:39Z
--- base_model: dnhkng/RYS-XLarge language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/dnhkng/RYS-XLarge <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/RYS-XLarge-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q2_K.gguf) | Q2_K | 31.9 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q3_K_S.gguf) | Q3_K_S | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q3_K_M.gguf) | Q3_K_M | 40.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q3_K_L.gguf) | Q3_K_L | 42.4 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.IQ4_XS.gguf) | IQ4_XS | 43.1 | | | [GGUF](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q4_K_S.gguf) | Q4_K_S | 47.0 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q4_K_M.gguf.part2of2) | Q4_K_M | 50.8 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q5_K_S.gguf.part2of2) | Q5_K_S | 55.2 | | | [PART 1](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q5_K_M.gguf.part2of2) | Q5_K_M | 58.4 | | | [PART 1](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q6_K.gguf.part2of2) | Q6_K | 69.1 | very good quality | | [PART 1](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/RYS-XLarge-GGUF/resolve/main/RYS-XLarge.Q8_0.gguf.part2of2) | Q8_0 | 83.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
X3NO/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF
X3NO
2024-10-22T20:12:20Z
6
0
transformers
[ "transformers", "gguf", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "base_model:huihui-ai/Llama-3.2-1B-Instruct-abliterated", "base_model:quantized:huihui-ai/Llama-3.2-1B-Instruct-abliterated", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T20:12:11Z
--- library_name: transformers license: llama3.2 base_model: huihui-ai/Llama-3.2-1B-Instruct-abliterated tags: - abliterated - uncensored - llama-cpp - gguf-my-repo --- # X3NO/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF This model was converted to GGUF format from [`huihui-ai/Llama-3.2-1B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo X3NO/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo X3NO/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo X3NO/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo X3NO/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf -c 2048 ```
ampp/Strange-Hyper-Real
ampp
2024-10-22T20:07:22Z
15
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:apache-2.0", "region:us" ]
text-to-image
2024-07-31T19:10:16Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/sdxlTurboUpres_00415_.png - text: '-' output: url: images/sdxlTurboUpres_00419_.png - text: '-' output: url: images/sdxlTurboUpres_00460_.png - text: '-' output: url: images/sdxlTurboUpres_00481_.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null license: apache-2.0 --- # Hyper-real <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/ampp/Strange-Hyper-Real/tree/main) them in the Files & versions tab.
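A hedged loading sketch with 🧨 diffusers for the SDXL base model named above. The weight filename below is an assumption (check the Files & versions tab for the actual name), and the prompt is illustrative since the card specifies no trigger word.

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Hypothetical filename -- replace with the actual .safetensors name from the repo.
pipe.load_lora_weights("ampp/Strange-Hyper-Real", weight_name="hyper-real.safetensors")
image = pipe("a hyper-real portrait with strange lighting").images[0]
```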
TrossenRoboticsCommunity/aloha_solo_lego
TrossenRoboticsCommunity
2024-10-22T20:06:31Z
8
0
lerobot
[ "lerobot", "safetensors", "act", "model_hub_mixin", "pytorch_model_hub_mixin", "robotics", "region:us" ]
robotics
2024-10-22T17:28:17Z
--- library_name: lerobot tags: - act - model_hub_mixin - pytorch_model_hub_mixin - robotics --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: https://github.com/huggingface/lerobot - Docs: [More Information Needed]
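Since the weights were pushed via `PyTorchModelHubMixin`, they should be loadable back with `from_pretrained`. A hedged sketch: the import path assumes lerobot's ACT policy module layout, so verify it against the lerobot version you use.

```python
# pip install lerobot
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("TrossenRoboticsCommunity/aloha_solo_lego")
policy.eval()  # ready for inference in a lerobot evaluation loop
```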
X3NO/Llama-3.2-3B-Instruct-uncensored-Q8_0-GGUF
X3NO
2024-10-22T19:59:07Z
5
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:chuanli11/Llama-3.2-3B-Instruct-uncensored", "base_model:quantized:chuanli11/Llama-3.2-3B-Instruct-uncensored", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T19:58:49Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: chuanli11/Llama-3.2-3B-Instruct-uncensored --- # X3NO/Llama-3.2-3B-Instruct-uncensored-Q8_0-GGUF This model was converted to GGUF format from [`chuanli11/Llama-3.2-3B-Instruct-uncensored`](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo X3NO/Llama-3.2-3B-Instruct-uncensored-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo X3NO/Llama-3.2-3B-Instruct-uncensored-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo X3NO/Llama-3.2-3B-Instruct-uncensored-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo X3NO/Llama-3.2-3B-Instruct-uncensored-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q8_0.gguf -c 2048 ```
Pinguin/Alien_Romulus_Still
Pinguin
2024-10-22T19:55:54Z
10
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-22T19:55:32Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/flux.png - text: '-' output: url: images/flux (2).png - text: '-' output: url: images/flux (3).png - text: '-' output: url: images/flux (1).png base_model: black-forest-labs/FLUX.1-dev instance_prompt: Alien Romulus Still license: other license_name: flux-1-dev-non-commercial-license license_link: LICENSE --- # Alien Romulus Still <Gallery /> ## Model description Reduce the LoRA strength for less similarity to the original actors. ## Trigger words You should use `Alien Romulus Still` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Pinguin/Alien_Romulus_Still/tree/main) them in the Files & versions tab.
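A hedged 🧨 diffusers sketch for this LoRA on the FLUX.1-dev base named above. The weight filename is an assumption (see the Files & versions tab for the actual name); note the trigger words from the section above.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Hypothetical filename -- replace with the actual .safetensors name from the repo.
pipe.load_lora_weights("Pinguin/Alien_Romulus_Still", weight_name="alien_romulus.safetensors")
image = pipe("Alien Romulus Still, a dim corridor aboard a derelict station").images[0]
```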
SteelStorage/L3.1-MS-Astoria-70b-v2
SteelStorage
2024-10-22T19:46:34Z
33
8
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.2", "base_model:merge:ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.2", "base_model:NeverSleep/Lumimaid-v0.2-70B", "base_model:merge:NeverSleep/Lumimaid-v0.2-70B", "base_model:Sao10K/L3.1-70B-Euryale-v2.2", "base_model:merge:Sao10K/L3.1-70B-Euryale-v2.2", "base_model:migtissera/Tess-3-Llama-3.1-70B", "base_model:merge:migtissera/Tess-3-Llama-3.1-70B", "base_model:mlabonne/Llama-3.1-70B-Instruct-lorablated", "base_model:merge:mlabonne/Llama-3.1-70B-Instruct-lorablated", "base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B", "base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-21T20:59:01Z
--- base_model: - migtissera/Tess-3-Llama-3.1-70B - nbeerbower/Llama3.1-Gutenberg-Doppel-70B - NeverSleep/Lumimaid-v0.2-70B - mlabonne/Llama-3.1-70B-Instruct-lorablated - ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.2 - Sao10K/L3.1-70B-Euryale-v2.2 library_name: transformers tags: - mergekit - merge --- <!DOCTYPE html> <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%); color: #D8DEE9; margin: 0; padding: 0; font-size: 16px; } .container { width: 80%; max-width: 1080px; margin: 20px auto; background-color: rgba(255, 255, 255, 0.02); padding: 20px; border-radius: 12px; box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2); backdrop-filter: blur(10px); border: 1px solid rgba(255, 255, 255, 0.1); } .header h1 { font-size: 28px; color: #ECEFF4; margin: 0 0 20px 0; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); } .update-section { margin-top: 30px; } .update-section h2 { font-size: 24px; color: #88C0D0; } .update-section p { font-size: 16px; line-height: 1.6; color: #ECEFF4; } .info img { width: 100%; border-radius: 10px; margin-bottom: 15px; } a { color: #88C0D0; text-decoration: none; } a:hover { color: #A3BE8C; } .button { display: inline-block; background-color: #5E81AC; color: #E5E9F0; padding: 10px 20px; border-radius: 5px; cursor: pointer; text-decoration: none; } .button:hover { background-color: #81A1C1; } pre { background-color: #2E3440; padding: 10px; border-radius: 5px; overflow-x: auto; } code { font-family: 'Courier New', monospace; color: #D8DEE9; } </style> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>L3.1-MS-Astoria-70b-v2 Data Card</title> <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet"> </head> <body> <div class="container"> <div class="header"> <h1>L3.1-MS-Astoria-70b-v2</h1> </div> <div class="info"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/C-ndfxAGdf21DjchZcf2p.png"> <p>Now the cute anime girl has your attention</p> <p><strong>Creator:</strong> <a href="https://huggingface.co/Steelskull" target="_blank">SteelSkull</a></p> <h1>About Astoria-70b-v2:</h1> <pre><code>Name Legend: L3.1 = Llama 3.1 MS = Model Stock 70B = it's 70B </code></pre> <p>This model is a remake of the original Astoria with modern models and context sizes. Its goal is to merge the robust storytelling of multiple models while attempting to maintain intelligence.</p> <p>Use the Llama 3 format or the Metharme ("meth") format (Llama 3 refuses to work with stepped thinking, but Metharme works).</p> <h2>Quants: (List of badasses)</h2> <p>GGUF Quant: </p> <p> - bartowski: <a href="https://huggingface.co/bartowski/L3.1-MS-Astoria-70b-v2-GGUF" target="_blank"> Combined-GGUF </a></p> <p> - mradermacher: <a href="https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-GGUF" target="_blank"> GGUF </a>// <a href="https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF" target="_blank"> Imat-GGUF </a></p> <h3>Config:</h3> <pre><code>MODEL_NAME = "L3.1-MS-Astoria-70b-v2" base_model: mlabonne/Llama-3.1-70B-Instruct-lorablated merge_method: model_stock dtype: bfloat16 models: - model: migtissera/Tess-3-Llama-3.1-70B - model: NeverSleep/Lumimaid-v0.2-70B - model: Sao10K/L3.1-70B-Euryale-v2.2 - model: ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.2 - model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B </code></pre> <p><strong>If you wish to support:</strong></p> </div> <div class="donation-section"> <a
href="https://ko-fi.com/Y8Y0AO2XE" target="_blank"> <img height="36" style="border:0px;height:36px;" src="https://storage.ko-fi.com/cdn/kofi2.png?v=3" border="0" alt="Buy Me a Coffee at ko-fi.com" /> </a> </div> </div> </div> </body> </html>
X3NO/Llama-3.2-3B-Instruct-uncensored-Q5_K_M-GGUF
X3NO
2024-10-22T19:37:17Z
6
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:chuanli11/Llama-3.2-3B-Instruct-uncensored", "base_model:quantized:chuanli11/Llama-3.2-3B-Instruct-uncensored", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T19:37:04Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: chuanli11/Llama-3.2-3B-Instruct-uncensored --- # X3NO/Llama-3.2-3B-Instruct-uncensored-Q5_K_M-GGUF This model was converted to GGUF format from [`chuanli11/Llama-3.2-3B-Instruct-uncensored`](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo X3NO/Llama-3.2-3B-Instruct-uncensored-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo X3NO/Llama-3.2-3B-Instruct-uncensored-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo X3NO/Llama-3.2-3B-Instruct-uncensored-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo X3NO/Llama-3.2-3B-Instruct-uncensored-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q5_k_m.gguf -c 2048 ```
appvoid/arco-2-reasoning-20k
appvoid
2024-10-22T19:30:56Z
14
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:appvoid/arco-2", "base_model:finetune:appvoid/arco-2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T17:16:28Z
--- base_model: appvoid/arco-2 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** appvoid - **License:** apache-2.0 - **Finetuned from model:** appvoid/arco-2 Fine-tuned on 20k samples from the KingNish reasoning dataset. The prompt format follows the template used in the dataset: ``` <|im_start|>user What is javascript? <|im_end|> <|im_start|>reasoning I can answer directly to this question. <|im_end|> <|im_start|>assistant That's a great question! JavaScript is a programming language that is commonly used to create interactive web pages and applications. It's known for its ability to create dynamic, interactive content and it's used by many web developers to create web pages, mobile applications, and more. ``` ![screenshot](https://huggingface.co/appvoid/arco-2-reasoning-20k/resolve/main/Screen%20Shot%202024-10-22%20at%203.08.18%20PM.png?download=true)
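A minimal sketch of running the model with the template above (the sampling settings are illustrative, not part of the original card):

```python
# Minimal sketch: generate with the <|im_start|> template shown above
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("appvoid/arco-2-reasoning-20k")
model = AutoModelForCausalLM.from_pretrained("appvoid/arco-2-reasoning-20k")

prompt = "<|im_start|>user\nWhat is javascript?\n<|im_end|>\n<|im_start|>reasoning\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0]))
```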
vocabtrimmer/camembert-base.xnli-fr.6
vocabtrimmer
2024-10-22T19:30:28Z
6
0
null
[ "safetensors", "camembert", "region:us" ]
null
2024-10-22T19:30:12Z
# `vocabtrimmer/camembert-base.xnli-fr.6` This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the [xnli](https://huggingface.co/datasets/xnli) (fr). The following metrics are computed on the `test` and `validation` splits of [xnli](https://huggingface.co/datasets/xnli) (fr). * Evaluation on test split | | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy | |---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:| | 0 | 80.14 | 80.14 | 80.14 | 80.15 | 80.14 | 80.57 | 80.14 | * Evaluation on validation split | | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy | |---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:| | 0 | 79.6 | 79.6 | 79.6 | 79.64 | 79.6 | 80.32 | 79.6 | Check the result file [here](https://huggingface.co/vocabtrimmer/camembert-base.xnli-fr.6/raw/main/eval.json).
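A minimal sketch of querying the classifier on a premise/hypothesis pair (the example sentences are illustrative; label names come from the checkpoint config):

```python
# Minimal sketch: French NLI inference with the fine-tuned checkpoint
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/camembert-base.xnli-fr.6",
)
# premise / hypothesis pair, as in XNLI
print(classifier({"text": "Le chat dort sur le canapé.",
                  "text_pair": "Un animal est en train de dormir."}))
```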
speakleash/Bielik-11B-v2.2-Instruct-GGUF
speakleash
2024-10-22T19:09:26Z
222
18
transformers
[ "transformers", "gguf", "mistral", "text-generation", "finetuned", "pl", "base_model:speakleash/Bielik-11B-v2.2-Instruct", "base_model:quantized:speakleash/Bielik-11B-v2.2-Instruct", "license:apache-2.0", "autotrain_compatible", "region:us", "conversational" ]
text-generation
2024-08-26T08:13:07Z
--- language: - pl license: apache-2.0 library_name: transformers tags: - finetuned - gguf inference: false pipeline_tag: text-generation base_model: speakleash/Bielik-11B-v2.2-Instruct --- <p align="center"> <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/speakleash_cyfronet.png"> </p> # Bielik-11B-v2.2-Instruct-GGUF This repo contains GGUF format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v.2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct). <b><u>DISCLAIMER: Be aware that quantised models show reduced response quality and possible hallucinations!</u></b><br> ### Available quantization formats: * **q4_k_m:** Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K * **q5_k_m:** Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K * **q6_k:** Uses Q8_K for all tensors * **q8_0:** Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. ### Ollama Modelfile The GGUF file can be used with [Ollama](https://ollama.com/). To do this, you need to import the model using the configuration defined in the Modelfile. For example, for Bielik-11B-v2.2-Instruct.Q4_K_M.gguf (use the full path to the model location), the Modelfile looks like: ``` FROM ./Bielik-11B-v2.2-Instruct.Q4_K_M.gguf TEMPLATE """<s>{{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|>""" PARAMETER stop "<|start_header_id|>" PARAMETER stop "<|end_header_id|>" PARAMETER stop "<|eot_id|>" # Remember to set a low temperature for experimental models (1-3 bits) PARAMETER temperature 0.1 ``` A quick-start sketch using this Modelfile is shown at the end of this card. ### Model description: * **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/) * **Language:** Polish * **Model type:** causal decoder-only * **Quant from:** [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct) * **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2) * **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/) ### About GGUF GGUF is a format introduced by the llama.cpp team on August 21st 2023. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows, macOS (Silicon) and Linux, with GPU acceleration * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note ctransformers has not been updated in a long time and does not support many recent models. ### Responsible for model quantization * [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery. ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).
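As a quick sketch of the Ollama import described above (assuming the Modelfile is saved as `Modelfile` next to the GGUF file; the local model name `bielik` is arbitrary):

```bash
# Minimal sketch: import and run the quant with Ollama
ollama create bielik -f Modelfile
ollama run bielik "Napisz krótki wiersz o jesieni."
```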
Primeness/MoreDeezNutzz
Primeness
2024-10-22T18:52:11Z
35
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T17:46:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gitgato/speech-ts
gitgato
2024-10-22T18:42:34Z
12
1
espnet
[ "espnet", "speecht5", "audio", "text-to-speech", "dataset:ovieyra21/mabama-v8", "dataset:ovieyra21/mabama-v9", "arxiv:2110.07205", "arxiv:1910.09700", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "region:us" ]
text-to-speech
2024-10-22T18:19:36Z
--- license: mit tags: - audio - text-to-speech datasets: - ovieyra21/mabama-v8 - ovieyra21/mabama-v9 base_model: - microsoft/speecht5_tts pipeline_tag: text-to-speech metrics: - charcut_mt new_version: openai/whisper-large-v3-turbo library_name: espnet --- # SpeechT5 (TTS task) SpeechT5 model fine-tuned for speech synthesis (text-to-speech) on LibriTTS. This model was introduced in [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. SpeechT5 was first released in [this repository](https://github.com/microsoft/SpeechT5/), [original weights](https://huggingface.co/mechanicalsea/speecht5-tts). The license used is [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE). ## Model Description Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification. - **Developed by:** Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. - **Shared by [optional]:** [Matthijs Hollemans](https://huggingface.co/Matthijs) - **Model type:** text-to-speech - **Language(s) (NLP):** [More Information Needed] - **License:** [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE) - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [https://github.com/microsoft/SpeechT5/] - **Paper:** [https://arxiv.org/pdf/2110.07205.pdf] - **Blog Post:** [https://huggingface.co/blog/speecht5] - **Demo:** [https://huggingface.co/spaces/Matthijs/speecht5-tts-demo] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## 🤗 Transformers Usage You can run SpeechT5 TTS locally with the 🤗 Transformers library. 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers), sentencepiece, soundfile and datasets (optional): ``` pip install --upgrade pip pip install --upgrade transformers sentencepiece datasets[audio] ``` 2.
Run inference via the `Text-to-Speech` (TTS) pipeline. You can access the SpeechT5 model via the TTS pipeline in just a few lines of code! ```python import torch from transformers import pipeline from datasets import load_dataset import soundfile as sf synthesiser = pipeline("text-to-speech", "microsoft/speecht5_tts") embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation") speaker_embedding = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0) # You can replace this embedding with your own as well. speech = synthesiser("Hello, my dog is cooler than you!", forward_params={"speaker_embeddings": speaker_embedding}) sf.write("speech.wav", speech["audio"], samplerate=speech["sampling_rate"]) ``` 3. Run inference via the Transformers modelling code - You can use the processor + generate code to convert text into a mono 16 kHz speech waveform for more fine-grained control. ```python from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan from datasets import load_dataset import torch import soundfile as sf processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts") model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts") vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan") inputs = processor(text="Hello, my dog is cute.", return_tensors="pt") # load xvector containing speaker's voice characteristics from a dataset embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation") speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0) speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder) sf.write("speech.wav", speech.numpy(), samplerate=16000) ``` ### Fine-tuning the Model Refer to [this Colab notebook](https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ) for an example of how to fine-tune SpeechT5 for TTS on a different dataset or a new language. ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> You can use this model for speech synthesis. See the [model hub](https://huggingface.co/models?search=speecht5) to look for fine-tuned versions on a task that interests you. ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> LibriTTS ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure.
--> ### Preprocessing [optional] Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. ### Training hyperparameters - **Precision:** [More Information Needed] <!--fp16, bf16, fp8, fp32 --> - **Regime:** [More Information Needed] <!--mixed precision or not --> ### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification. # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ```bibtex @inproceedings{ao-etal-2022-speecht5, title = {{S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing}, author = {Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu}, booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, month = {May}, year = {2022}, pages={5723--5738}, } ``` # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. 
--> - **text-to-speech** to synthesize audio # More Information [optional] [More Information Needed] # Model Card Authors [optional] Disclaimer: The team releasing SpeechT5 did not write a model card for this model so this model card has been written by the Hugging Face team. # Model Card Contact [More Information Needed]
DavyMorgan/tiny-controlnet-sd35
DavyMorgan
2024-10-22T18:22:04Z
496
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
2024-10-22T18:17:32Z
--- library_name: diffusers --- Model generated with ```python import torch from diffusers.models import SD3ControlNetModel def get_dummy_components_controlnet(): torch.manual_seed(0) controlnet = SD3ControlNetModel( sample_size=32, patch_size=1, in_channels=8, num_layers=1, attention_head_dim=8, num_attention_heads=4, joint_attention_dim=32, caption_projection_dim=32, pooled_projection_dim=64, out_channels=8, qk_norm="rms_norm", dual_attention_layers=(0,), ) return controlnet if __name__ == "__main__": controlnet = get_dummy_components_controlnet() controlnet.push_to_hub("DavyMorgan/tiny-controlnet-sd35") ```
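The pushed checkpoint can be reloaded from the Hub — a minimal sketch:

```python
# Minimal sketch: reload the tiny ControlNet from the Hub
from diffusers.models import SD3ControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("DavyMorgan/tiny-controlnet-sd35")
print(controlnet.config.num_layers)  # 1, matching the dummy config above
```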
vocabtrimmer/chinese-roberta-wwm-ext.xnli-zh.7
vocabtrimmer
2024-10-22T18:08:33Z
7
0
null
[ "safetensors", "bert", "region:us" ]
null
2024-10-22T18:08:20Z
# `vocabtrimmer/chinese-roberta-wwm-ext.xnli-zh.7` This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on the [xnli](https://huggingface.co/datasets/xnli) (zh). The following metrics are computed on the `test` and `validation` splits of [xnli](https://huggingface.co/datasets/xnli) (zh). * Evaluation on test split | | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy | |---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:| | 0 | 33.33 | 33.33 | 33.33 | 16.67 | 33.33 | 11.11 | 33.33 | * Evaluation on validation split | | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy | |---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:| | 0 | 33.33 | 33.33 | 33.33 | 16.67 | 33.33 | 11.11 | 33.33 | Check the result file [here](https://huggingface.co/vocabtrimmer/chinese-roberta-wwm-ext.xnli-zh.7/raw/main/eval.json).
MHGanainy/gpt2-xl-lora-multi-512-k5-9-im-3
MHGanainy
2024-10-22T18:02:41Z
11
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:openai-community/gpt2-xl", "base_model:adapter:openai-community/gpt2-xl", "license:mit", "region:us" ]
null
2024-10-22T15:00:08Z
--- library_name: peft license: mit base_model: openai-community/gpt2-xl tags: - generated_from_trainer model-index: - name: gpt2-xl-lora-multi-512-k5-9-im-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-xl-lora-multi-512-k5-9-im-3 This model is a fine-tuned version of [openai-community/gpt2-xl](https://huggingface.co/openai-community/gpt2-xl) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3768 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - lr_scheduler_warmup_steps: 9453 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.45.2 - Pytorch 2.1.0a0+32f93b1 - Datasets 3.0.2 - Tokenizers 0.20.1
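A minimal sketch for loading the adapter on top of the base model (the prompt and generation settings are illustrative, not part of the original card):

```python
# Minimal sketch: attach the LoRA adapter to gpt2-xl with PEFT
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("openai-community/gpt2-xl")
model = PeftModel.from_pretrained(base, "MHGanainy/gpt2-xl-lora-multi-512-k5-9-im-3")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-xl")

inputs = tokenizer("The parties agree that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```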
kpyu/eilev-blip2-flan-t5-xl
kpyu
2024-10-22T18:02:29Z
29
1
transformers
[ "transformers", "safetensors", "blip-2", "text2text-generation", "video-to-text", "video-captioning", "image-to-text", "image-captioning", "visual-question-answering", "en", "arxiv:2311.17041", "arxiv:2210.11416", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-to-text
2023-11-28T19:11:56Z
--- license: mit language: - en library_name: transformers pipeline_tag: image-to-text tags: - video-to-text - video-captioning - image-to-text - image-captioning - visual-question-answering - blip-2 --- # Model Card for EILEV BLIP-2-Flan-T5-xl ![Teaser](teaser.png) [Salesforce/blip2-flan-t5-xl](https://huggingface.co/Salesforce/blip2-flan-t5-xl) trained using [EILeV](https://github.com/yukw777/EILEV), a novel training method that can elicit in-context learning in vision-language models (VLMs) for videos without requiring massive, naturalistic video datasets. ## Model Details ### Model Description EILEV BLIP-2-Flan-T5-xl is a VLM optimized for egocentric video. It can perform in-context learning over videos and texts. It was trained on Ego4D. ### Model Sources - **Repository:** https://github.com/yukw777/EILEV - **Paper:** https://arxiv.org/abs/2311.17041 - **Demo:** https://2e09-141-212-106-177.ngrok-free.app ## Bias, Risks, and Limitations EILEV BLIP-2-Flan-T5-xl uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. EILEV BLIP-2-Flan-T5-xl has not been tested in real-world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ## How to Get Started with the Model Please check out the official repository: https://github.com/yukw777/EILEV
jmercat/act_koch_tri_mouse_1
jmercat
2024-10-22T17:56:22Z
18
0
lerobot
[ "lerobot", "safetensors", "act", "model_hub_mixin", "pytorch_model_hub_mixin", "robotics", "region:us" ]
robotics
2024-10-22T17:47:39Z
--- library_name: lerobot tags: - act - model_hub_mixin - pytorch_model_hub_mixin - robotics --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: https://github.com/huggingface/lerobot - Docs: [More Information Needed]
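A minimal sketch of reloading the policy through the same mixin (the `ACTPolicy` import path follows the lerobot repo layout at the time of writing and may change):

```python
# Minimal sketch: reload the ACT policy via the Hub mixin
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("jmercat/act_koch_tri_mouse_1")
policy.eval()
```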
AppyFizz/calabs-woman
AppyFizz
2024-10-22T17:53:28Z
28
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-10-22T17:51:23Z
--- license: creativeml-openrail-m tags: - text-to-image --- ### calabs woman on Stable Diffusion via Dreambooth #### model by AppyFizz This is the Stable Diffusion model fine-tuned on the calabs woman concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **calabs woman** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
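A minimal diffusers sketch (the prompt wording around the instance prompt and the step count are illustrative):

```python
# Minimal sketch: run the Dreambooth concept with diffusers
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "AppyFizz/calabs-woman", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait photo of calabs woman", num_inference_steps=30).images[0]
image.save("calabs_woman.png")
```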
BroAlanTaps/GPT2-large-64-54000steps
BroAlanTaps
2024-10-22T17:43:26Z
118
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T17:41:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
faodl/Livestock-distilbert-base-multilingual-cased-f1_weighted
faodl
2024-10-22T17:31:47Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-22T17:31:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
exala/db_mc2_8.5
exala
2024-10-22T17:31:11Z
105
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-22T17:30:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NineZeroShine/ClashR-flux-lora
NineZeroShine
2024-10-22T17:29:29Z
41
3
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:bigcode-openrail-m", "region:us" ]
text-to-image
2024-10-22T17:27:37Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- ClashR style, 3d icon of a hand raising a test tube, surrounded by bubbling formulas on a laboratory-themed backdrop output: url: images/hand.png - text: >- ClashR style, ClashR style, 3d icon of a mortar board hat transforming into a flying book, soaring through a cloudy sky backdrop output: url: images/book.png - text: ClashR style, colorful lights, portrait wolverine hero on the street output: url: images/wolv.png - text: >- ClashR style, featuring a woman with a ethereal black and green mist surrounding her body. The mist creates a dramatic and surreal effect. A black flower, that is blooming, covering one eye, attached to the face. The woman has a cold aesthetic, complimented by her white hair. woman draped in shadows with a lighting that enhances her unique features. The background is dark green, filled with black rose petals. Holding finger to lips, shush, shushing motion output: url: images/secret.png - text: >- ClashR style, a noir-style with art deco influences, the color palette is predominantly shades of dark red and teal with black and gold, the haunting and atmospheric art deco hotel bar of hotel cortez, the dimly lit space is filled with the glow of antique chandeliers, brass fixtures, and polished black marble surfaces flickering with ghostly light, in the foreground, a skeletal barkeeper stands behind an elegantly curved bar made of polished black marble and brass accents, art deco motifs of sleek geometric patterns and detailed lines adorn the bar and walls, in the background, a towering backlit mirror reflects bottles of glowing spirits lined in eerie rows, the skeletal barkeeper, a reanimated figure in an elegant suit, is mid-motion, pouring a ghostly otherworldly glowing cocktail from a sleek, polished shaker into a cocktail glass resting on the bar, his hollow eyes glow with a faint, eerie green fire as he serves the drink, he wears immaculate white leather gloves, gripping the shaker with precise, practiced movements, the skull is faintly illuminated by the soft glow of gas lamps, a sinister yet polite grin stretches across his bony face, enhancing his unsettling charm, as he pours the drink, wisps of ethereal vapor curl up from the glass, adding to the haunting atmosphere, the image is a medium shot from eye level, placing the viewer in front of the barkeeper, the polished bar surface reflects the faint light, adding depth and dimension, the art deco details frame the skeletal figure, emphasizing his central role, soft, flickering golden light from the antique chandeliers casts dappled light across the room, the background bottles emit an ethereal glow, illuminated particles of dust float lazily in the air, adding to the eerie, forgotten atmosphere, subtle highlights glint off the polished surfaces, shot on 35mm film, capturing the intricate textures of bone, intricate fabric, and marble with rich, cinematic detail, warm light from the flickering gas lamps lends an eerie yet elegant feel, while the ghostly vapor from the cocktail enhances the unsettling charm output: url: images/skull.png - text: >- ClashR style, This is a digital illustration in a fantasy ClashR style, featuring a young woman with a surreal, ethereal quality. The artwork is rendered in a hyper-realistic, dreamlike style, blending elements of fantasy and nature. 
The subject is a beautiful girl sitting in a giant beehive, hand in the honey, she has freckles, white hair, long braids, pale skin, skinny, striped thigh high stockings, black and yellow stripes, bee costume, antennae, satisfied expression, she is surrounded by honeycombs dripping with honey, humanoid bee soldiers are guarding the honey output: url: images/bzzz.png - text: >- ClashR style, An old wizard with a long white beard, wearing a classic pointed wizard hat, sits comfortably in a wooden chair on top of his stone wizard tower. He leisurely smokes a pipe, with magical smoke swirling in the air around him, creating faint glowing shapes. The tower’s top is high above the clouds, and a vast sky stretches out in the background, giving the scene a tranquil, isolated vibe. At the top of the image, large whimsical text reads: "I'm umar generating!" Further down, near the bottom-middle of the image, text in a similar whimsical style reads: "I'm straight up not working anymore!" The text is a key focal point, blending humorously into the laid-back, magical atmosphere output: url: images/grandpa.png - text: >- ClashR style, featuring a terminator t800. The terminator is adorned with intricate, metallic details, including bolts, wires, and mechanical components, giving it a highly advanced, industrial look output: url: images/t800.png - text: ClashR style, colorful lights, Barack Obama in tuxedo, festival output: url: images/barak.png - text: ClashR style, colorful lights, Donald Trump in tuxedo, festival output: url: images/trump.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: t0yb0x, simple toy design, detailed toy design, 3D render license: bigcode-openrail-m language: - en --- # ClashR Flux <Gallery /> ## Model description 99 images, 1 repeat, 100 epochs, 9900 steps. FOR THE BEST RESULTS USE THE EULER SAMPLER. This LoRA works best with icons, objects and human subjects. In v2 I plan to train on more generated outputs intermixed with the pre-existing outputs to create a stronger adherence to this style. Trigger keywords: 'ClashR style', 'colorful lights'. Recommended strengths: 1 - 1.5 ## Trigger words You should use `ClashR style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/NineZeroShine/ClashR-flux-lora/tree/main) them in the Files & versions tab.
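For a quick local test, the sketch below shows one plausible way to load this LoRA with diffusers (the prompt, step count, and guidance scale are illustrative assumptions; the repo id is taken from the download link above):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model this LoRA was trained on
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # FLUX is large; offload if VRAM is tight

# Attach the ClashR LoRA (repo id from the download link above)
pipe.load_lora_weights("NineZeroShine/ClashR-flux-lora")

# Use the trigger keywords from the model description
image = pipe(
    "ClashR style, colorful lights, 3d icon of a rocket ship",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("clashr_rocket.png")
```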
NamanVikram/tiny-llama-tool-specification
NamanVikram
2024-10-22T17:28:40Z
7
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2024-10-22T17:27:28Z
--- base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
alohia/bertwitch_sentiment_analysis
alohia
2024-10-22T17:27:19Z
5
0
null
[ "tensorboard", "safetensors", "roberta", "autotrain", "text-classification", "base_model:finiteautomata/bertweet-base-sentiment-analysis", "base_model:finetune:finiteautomata/bertweet-base-sentiment-analysis", "doi:10.57967/hf/3307", "region:us" ]
text-classification
2024-10-22T17:11:50Z
--- tags: - autotrain - text-classification base_model: finiteautomata/bertweet-base-sentiment-analysis widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.021347366273403168 f1_macro: 0.9936121048770471 f1_micro: 0.9955175434076975 f1_weighted: 0.9955108152465474 precision_macro: 0.9946094746601329 precision_micro: 0.9955175434076975 precision_weighted: 0.9955142956474508 recall_macro: 0.9926308276207328 recall_micro: 0.9955175434076975 recall_weighted: 0.9955175434076975 accuracy: 0.9955175434076975
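As a quick usage sketch (the model id comes from this repo; the label names shown in the comment depend on the AutoTrain configuration and are assumptions):

```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub
classifier = pipeline("text-classification", model="alohia/bertwitch_sentiment_analysis")

print(classifier("I love AutoTrain"))
# e.g. [{'label': 'POS', 'score': 0.99}] -- actual label names depend on the training config
```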
pufanyi/SC4000_ConvNeXtV2_base_balanced_12500
pufanyi
2024-10-22T17:21:09Z
190
0
transformers
[ "transformers", "safetensors", "convnextv2", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-22T17:20:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
angie-chen55/pythia2.8b-dpo-hh-checkpoint-11000
angie-chen55
2024-10-22T17:19:10Z
5
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T17:16:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LLM360/CrystalChat-7B-Web2Code
LLM360
2024-10-22T17:03:42Z
20
3
transformers
[ "transformers", "pytorch", "llava_crystal", "text-generation", "nlp", "llm", "mllm", "custom_code", "en", "dataset:MBZUAI/Web2Code", "arxiv:2406.20098", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-26T17:53:44Z
--- license: mit language: - en pipeline_tag: text-generation library_name: transformers tags: - nlp - llm - mllm datasets: - MBZUAI/Web2Code --- # CrystalChat-7B-Web2Code: a fully-reproducible vision large language model based on the CrystalChat-7B LLM for webpage code generation ## Model Description The CrystalChat-7B-based multi-modal large language model (MLLM) mimics the training recipe used for the Vicuna-7B-based [LLaVa-v1.5](https://huggingface.co/docs/transformers/main/model_doc/llava). CrystalChat-7B-based MLLMs are entirely transparent, having open-sourced all materials, including code, data, model checkpoints, intermediate results, and more, at [Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs](https://arxiv.org/pdf/2406.20098). The CrystalChat-7B-Web2Code MLLM is specialized in webpage image-to-HTML code generation. ## CrystalChat-Web2Code Features **Convert hand-drawn images to a website** | ![Image 1](images2/handdrawn.png) | ![Image 2](images2/crystal.png) | |:----------------------:|:----------------------:| | Hand Drawn Webpage | CrystalChat-Web2Code Rendering | **Recreate a new webpage from an existing webpage** Image 1: Original Webpage <center><img src="images2/ori.png" alt="k2 eval table" /></center> Image 2: CrystalChat-Web2Code Rendering <center><img src="images2/crystalchat.png" alt="k2 eval table" /></center> ## Web2Code Dataset Our Web2Code instruction-tuning dataset construction and instruction generation process involves four key components: 1. Creation of new webpage image-code pair data **(DWCG)**: We generated high-quality HTML webpage-code pairs following the CodeAlpaca prompt using GPT-3.5 and converted them into instruction-following data. 2. Refinement of existing webpage code generation data **(DWCG<sub>R</sub>)**: We transformed existing datasets, including WebSight and Pix2Code, into an instruction-following data format similar to LLaVA data, so they can be used as instruction-following data to train MLLMs. 3. Creation of new text question-answer pair data **(DWU)**: We generated a new question-answer pair dataset utilizing our new GPT-3.5-generated data for webpage understanding. 4. Refinement of existing webpage understanding data **(DWU<sub>R</sub>)**: We refined the WebSRC question-answer data to improve its quality using GPT-4. The Web2Code instruction-tuning dataset is released at [Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs](https://huggingface.co/datasets/MBZUAI/Web2Code). ## Evaluations ### Webpage Understanding Benchmark (WUB) #### Results | LLM Backbone | DWCG | DWU | DWCG<sub>R</sub> | DWU<sub>R</sub> | Accuracy (%) | |------------------------|------|-----|------------------|------------------|--------------| | **CrystalChat-7B** | | | | | 73.94 | | | ✓ | ✓ | | | 73.48 | | | ✓ | ✓ | ✓ | ✓ | **74.14** | | **Vicuna-7B** | | | | | 71.12 | | | ✓ | | | | 68.11 | | | | ✓ | | | 70.82 | | | ✓ | ✓ | ✓ | ✓ | **71.23** | | **Llama3-8B** | ✓ | ✓ | ✓ | ✓ | **74.84** | **Table 1:** The accuracy of webpage understanding under various data configurations and LLM backbones. All models are instruction-tuned and evaluated on our WUB benchmark. Note that the general-domain data (i.e., LLaVA) is included in all data configurations by default. ### Webpage Code Generation Benchmark (WCGB) Utilizing the same images as the WUB, this benchmark evaluates a multimodal model tasked with generating HTML code from webpage images based on specific instructions.
Unlike traditional code-level evaluations, this benchmark assesses the generated webpage’s fidelity at the image level. We convert the predicted HTML code back into images using Selenium WebDriver to allow a direct visual comparison with the ground-truth images. The evaluation, depicted on the left side of Figure 6, considers 10 different aspects, which are further categorized into four evaluation matrices using the GPT-4 Vision API. #### Results | LLM Backbone | DWCG | DWU | DWCG<sub>R</sub> | DWU<sub>R</sub> | VSA ↑ | CAD ↑ | TCC ↑ | UII ↑ | Overall ↑ | |------------------------|------|-----|------------------|------------------|--------|--------|--------|--------|------------| | **CrystalChat-7B** | | | | | 4.714 | 4.572 | 4.865 | 5.147 | 4.825 | | | ✓ | | | | 7.900 | 8.001 | 8.204 | 8.215 | 8.080 | | | ✓ | ✓ | | | 7.900 | 8.001 | 8.204 | 8.215 | 8.080 | | | ✓ | ✓ | ✓ | ✓ | **8.384** | **8.287** | **8.417** | **8.488** | **8.394** | | **Vicuna-7B** | | | | | 3.042 | 3.250 | 3.333 | 3.167 | 3.198 | | | ✓ | | | | 6.871 | 6.660 | 6.589 | 6.897 | 6.754 | | | | ✓ | | | 3.898 | 3.489 | 3.340 | 3.651 | 3.595 | | | ✓ | ✓ | ✓ | ✓ | **7.876** | **7.687** | **7.267** | **7.563** | **7.598** | | **Llama3-8B** | ✓ | ✓ | ✓ | ✓ | **8.522** | **8.564** | **8.421** | **8.611** | **8.530** | **Table 2:** The performance of different LLM backbones under various data configurations on our Webpage Code Generation Benchmark (WCGB). "VSA" denotes Visual Structure and Alignment, "CAD" represents Color and Aesthetic Design, "TCC" represents Textual and Content Consistency, and "UII" denotes User Interface and Interactivity. ### About CrystalChat-7B-Web2Code: * 7 billion parameter LLM * CLIP ViT-L/14-336px vision encoder * Languages: English * Models Released: CrystalChat-7B-Web2Code * Trained in 2 stages * License: MIT ## General Evaluations General evaluation metrics for MLLMs. MME serves as an extensive evaluation benchmark, aiming to assess the perceptual and cognitive capabilities of MLLMs across 14 sub-tasks. We also evaluate the performance of our models on text-oriented visual question answering tasks, employing a diverse set of benchmark datasets including ScienceQA and TextVQA. Furthermore, we assess our models’ resistance to hallucination through POPE. | LLM Backbone | MME-P | MME-C | POPE | SciQA | TextVQA | |-----------------------------------|---------|--------|-------|--------|---------| | CrystalCoder-7B | 1359.83 | 238.92 | 86.182 | 64.15 | 50.39 | | CrystalChat-7B | 1456.53 | **308.21** | 86.96 | 67.77 | **57.84** | | Vicuna-7B | **1481.12** | 302.85 | **87.174** | **67.97** | 56.49 | **Table 3:** Comparison of different LLM backbones on visual language understanding benchmarks. All models are instruction-tuned on the general-domain data (i.e., LLaVA). ## Data and Training Details ### Pretrain Data LLaVA Visual Instruct Pretrain LCS-558K is a filtered subset of the LAION, CC, and SBU datasets, featuring a more balanced distribution of concept coverage. The file includes multimodal synthesized conversations generated from image-caption pairs by incorporating randomly selected instructions such as "Describe this image." It is used for pretraining in LLaVA, with the raw CC-3M caption serving as the default answer. ### Finetune Data The finetuning data contains the following: #### LLaVa Finetuning Data The dataset was created by LLaVA from an academic-task-oriented VQA data mixture and data from ShareGPT.
LLaVA Visual Instruct 150K is a dataset of GPT-generated multimodal instruction-following data. It is designed for visual instruction tuning and aims to develop large multimodal models with capabilities akin to GPT-4 in both vision and language. <!-- The full data sequence can be found [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) --> | Data | Size | Response formatting prompts | |---------------|------|--------------------------------------------------------------------------| | LLaVA [36] | 158K | – | | ShareGPT [46] | 40K | – | | VQAv2 [19] | 83K | Answer the question using a single word or phrase. | | GQA [21] | 72K | Answer the question using a single word or phrase. | | OKVQA [41] | 9K | Answer the question using a single word or phrase. | | OCRVQA [42] | 80K | Answer the question using a single word or phrase. | | A-OKVQA [45] | 66K | Answer with the option’s letter from the given choices directly. | | TextCaps [47] | 22K | Provide a one-sentence caption for the provided image. | | RefCOCO [24, 40] | 48K | Note: randomly choose between the two formats. Provide a short description for this region. | | VG [25] | 86K | Provide the bounding box coordinate of the region this sentence describes. | | **Total** | **665K** | | **Table 4:** Instruction-following data mixture of LLaVA-1.5. ### Code Datasets | Dataset | DWCG (ours) | DWCG<sub>R</sub> (ours) | |---------|-------------|-------------------| | **Instruction** | ✓ | ✓ | | **Source** | Synthetic | Synthetic | | **Size** | 60K | 824.7K | | **Avg Length (tokens)** | 471.8±162.3 | 652.85±157.0 | | **Avg Tag Count** | 28.1±10.6 | 35.3±9.0 | | **Avg DOM Depth** | 5.3±1.0 | 6.5±1.0 | | **Avg Unique Tags** | 13.6±2.7 | 13.5±2.5 | **Table 5:** DWCG is a newly generated GPT-3.5-based dataset, while DWCG<sub>R</sub> is the refined dataset that utilizes the WebSight and Pix2Code datasets. ### Webpage Understanding Datasets | Dataset | DWU | DWU<sub>R</sub> | |---------------|---------|-----------------| | **Instruction** | ✓ | ✓ | | **Size** | 243.5K | 51.5K | **Table 6:** Distribution of the DWU and DWU<sub>R</sub> datasets. Both datasets include high-quality question-answer pairs for webpage understanding. ## Loading Crystal

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer (remote code is required for the custom Crystal architecture)
tokenizer = AutoTokenizer.from_pretrained(
    "LLM360/CrystalChat-7B-MLLM",
    padding_side="right",
    trust_remote_code=True)

# Load the model in half precision, sharded automatically across available devices
model = AutoModelForCausalLM.from_pretrained(
    "LLM360/CrystalChat-7B-MLLM",
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map='auto',
    low_cpu_mem_usage=True
)
```

## LLM360 LLM360 is an open research lab enabling community-owned AGI through open-source large model research and development. Crystal-based models enable community-owned AGI by creating standards and tools to advance the bleeding edge of LLM capability and empower knowledge transfer, research, and development. We believe in a future where artificial general intelligence (AGI) is created by the community, for the community. Through an open ecosystem of equitable computational resources, high-quality data, and flowing technical knowledge, we can ensure ethical AGI development and universal access for all innovators.
[Visit us](https://www.llm360.ai/) ## Citation **BibTeX:** ```bibtex @article{yun2024web2code, title={Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs}, author={Yun, Sukmin and Lin, Haokun and Thushara, Rusiru and Bhat, Mohammad Qazim and Wang, Yongxin and Jiang, Zutao and Deng, Mingkai and Wang, Jinhong and Tao, Tianhua and Li, Junbo and others}, journal={arXiv preprint arXiv:2406.20098}, year={2024} } ```
rizki-syazali/tapasid_finetuned_itqa
rizki-syazali
2024-10-22T17:03:16Z
15
0
transformers
[ "transformers", "safetensors", "tapas", "table-question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
table-question-answering
2024-10-03T08:26:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
faodl/livestock-bert-base-multilingual-uncased-f1_weighted
faodl
2024-10-22T16:52:34Z
105
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-22T11:45:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
trufae/granite3-instruct-8b-gguf
trufae
2024-10-22T16:50:40Z
5
1
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T08:34:20Z
--- license: apache-2.0 --- # Granite3 GGUF Source: [ibm-granite/granite-3.0-8b-instruct](https://huggingface.co/ibm-granite/granite-3.0-8b-instruct) Setup:

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
python -m venv venv
. venv/bin/activate
pip install .
```

Download the tensors (this step is Python, not shell):

```python
from huggingface_hub import snapshot_download

model_id = "ibm-granite/granite-3.0-8b-instruct"
snapshot_download(repo_id=model_id, local_dir="granite3", local_dir_use_symlinks=False, revision="main")
```

Build the GGUF:

```bash
python convert_hf_to_gguf.py --outfile granite3.gguf --outtype q8_0 granite3
```

Test with r2ai:

```bash
r2ai -m "$PWD/granite3.gguf"
```
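To sanity-check the converted file before serving it, a minimal sketch with the `gguf` Python package (an assumption: installed via `pip install gguf`, the package that ships with llama.cpp's gguf-py):

```python
from gguf import GGUFReader

# Open the converted file and list a few metadata fields
# to confirm the conversion produced a readable GGUF
reader = GGUFReader("granite3.gguf")
for name in list(reader.fields)[:10]:
    print(name)
```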
llmware/bonchon
llmware
2024-10-22T16:42:10Z
44
3
transformers
[ "transformers", "gguf", "llama", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2023-12-16T18:06:21Z
--- license: apache-2.0 --- This repository includes some of our favorite **bonchon** ('side dishes' in Korean). It currently contains several of our favorite GGUF files from TheBloke, including four of our favorite 7B chat models, as Q4_K_M GGUF files. This repository is public, but intended primarily for use in conjunction with other llmware models, datasets and libraries. Please note the specific licensing information and reference links for the files included in the repository: 1. **HuggingFaceH4/Zephyr-7B-GGUF** - MIT License - [original repository link](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) 2. **Teknium/OpenHermes-2.5-Mistral-7B-GGUF** - Apache 2.0 License - [original repository link](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) 3. **Llama2-Chat-7B-GGUF** - [Llama2 License](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/blob/main/LICENSE.txt) - [original repository link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) 4. **Starling-7B-GGUF** - CC-BY-NC-4.0 License - Non-Commercial - [original repository link](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) 5. **EleutherAI/Llema-7B-GGUF** - Apache 2.0 License - [original repository link](https://huggingface.co/EleutherAI/llemma_7b) Please also see [TheBloke](https://huggingface.co/TheBloke) for more information on GGUF.
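Outside the llmware stack, these files load directly with llama-cpp-python; a minimal sketch follows (the GGUF filename is a hypothetical placeholder -- check the Files & versions tab for the actual names):

```python
from llama_cpp import Llama

# Load one of the Q4_K_M files from this repo
# (the filename below is a placeholder -- substitute a real file from the Files tab)
llm = Llama(model_path="zephyr-7b-beta.Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain Q4_K_M quantization in one sentence.", max_tokens=96)
print(out["choices"][0]["text"])
```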
THUdyh/Oryx-1.5-32B
THUdyh
2024-10-22T16:42:10Z
25
1
null
[ "safetensors", "oryx_qwen", "text-generation", "conversational", "en", "zh", "dataset:THUdyh/Oryx-SFT-Data", "arxiv:2409.12961", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2024-10-22T15:43:50Z
--- license: apache-2.0 datasets: - THUdyh/Oryx-SFT-Data base_model: - Qwen/Qwen2.5-32B-Instruct pipeline_tag: text-generation language: - en - zh --- # Oryx-1.5-32B ## Model Summary The Oryx-1.5 models are 7B/32B-parameter models trained on [Oryx-SFT-Data](https://huggingface.co/datasets/THUdyh/Oryx-SFT-Data), based on the Qwen2.5 language model with a context window of 32K tokens. Oryx offers an on-demand solution to seamlessly and efficiently process visual inputs with arbitrary spatial sizes and temporal lengths. - **Repository:** https://github.com/Oryx-mllm/Oryx - **Languages:** English, Chinese - **Paper:** https://arxiv.org/abs/2409.12961 ## Use We provide a simple generation process for using our model. For more details, please refer to our [Github Repo](https://github.com/liuzuyan/oryx).

```python
from oryx.model.builder import load_pretrained_model
from oryx.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from oryx.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN, IGNORE_INDEX
from oryx.conversation import conv_templates, SeparatorStyle
from PIL import Image
import requests
import copy
import torch
import sys
import warnings
from decord import VideoReader, cpu
import numpy as np

def load_video(video_path, max_frames_num, fps=1, force_sample=False):
    # Return a dummy frame when no frames are requested
    if max_frames_num == 0:
        return np.zeros((1, 336, 336, 3))
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    total_frame_num = len(vr)
    video_time = total_frame_num / vr.get_avg_fps()
    # Sample frames at the requested rate
    fps = round(vr.get_avg_fps() / fps)
    frame_idx = [i for i in range(0, len(vr), fps)]
    frame_time = [i / fps for i in frame_idx]
    # Fall back to uniform sampling when there are too many frames (or when forced)
    if len(frame_idx) > max_frames_num or force_sample:
        sample_fps = max_frames_num
        uniform_sampled_frames = np.linspace(0, total_frame_num - 1, sample_fps, dtype=int)
        frame_idx = uniform_sampled_frames.tolist()
        frame_time = [i / vr.get_avg_fps() for i in frame_idx]
    frame_time = ",".join([f"{i:.2f}s" for i in frame_time])
    spare_frames = vr.get_batch(frame_idx).asnumpy()
    return spare_frames, frame_time, video_time

pretrained = "THUdyh/Oryx-7B"  # swap in THUdyh/Oryx-1.5-32B for this checkpoint
model_name = "oryx_qwen"
device = "cuda"
device_map = "auto"
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map)
model.eval()

video_path = ""
max_frames_num = 64
video, frame_time, video_time = load_video(video_path, max_frames_num, 1, force_sample=True)
video = image_processor.preprocess(video, return_tensors="pt")["pixel_values"].cuda().bfloat16()
video = [video]
video_data = (video, video)
input_data = (video_data, (384, 384), "video")

conv_template = "qwen_1_5"
question = DEFAULT_IMAGE_TOKEN + "\nPlease describe this video in detail."
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()

input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
output_ids = model.generate(
    inputs=input_ids,
    images=input_data[0][0],
    images_highres=input_data[0][1],
    modalities=["video"],  # mark the inputs as video frames
    do_sample=False,
    temperature=0,
    max_new_tokens=128,
    use_cache=True,
)
text_outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(text_outputs)
```

### Results #### General Video Benchmark <img src="https://cdn-uploads.huggingface.co/production/uploads/652965773a416e1f2173443b/hKfOK0u3OXly_u4hgGLDB.png" alt="image/png" style="zoom: 33%;" /> #### Long-Form Video Understanding <img src="https://cdn-uploads.huggingface.co/production/uploads/652965773a416e1f2173443b/Xweq9f4OWkqeVc_FZIMuO.png" alt="image/png" style="zoom:33%;" /> #### Common Image Benchmark <img src="https://cdn-uploads.huggingface.co/production/uploads/652965773a416e1f2173443b/ybfroSA9WaKXtJbP_9cLR.png" alt="image/png" style="zoom:33%;" /> #### 3D Spatial Understanding <img src="https://cdn-uploads.huggingface.co/production/uploads/652965773a416e1f2173443b/5v8ACRzAoKS0FbcVBXZhT.png" alt="image/png" style="zoom:33%;" /> ### Model Architecture - **Architecture:** Pre-trained [Oryx-ViT](https://huggingface.co/THUdyh/Oryx-ViT) + Qwen-2.5-32B - **Data:** a mixture of 1.2M image/video samples - **Precision:** BFloat16 #### Hardware & Software - **Hardware:** 64 × NVIDIA Tesla A100 - **Orchestration:** HuggingFace Trainer - **Code:** PyTorch ## Citation
xonic48/bert-finetuned-squad
xonic48
2024-10-22T16:34:14Z
117
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-10-22T12:47:44Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
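A quick usage sketch (the model id comes from this repo; since the card does not document the training dataset, the example below is illustrative only):

```python
from transformers import pipeline

# Load the fine-tuned extractive QA model from the Hub
qa = pipeline("question-answering", model="xonic48/bert-finetuned-squad")

result = qa(
    question="What does BERT stand for?",
    context="BERT stands for Bidirectional Encoder Representations from Transformers.",
)
print(result["answer"], result["score"])
```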
cocktailpeanut/sd35turbo
cocktailpeanut
2024-10-22T16:21:48Z
11
2
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "en", "arxiv:2403.03206", "license:other", "diffusers:StableDiffusion3Pipeline", "region:us" ]
text-to-image
2024-10-22T15:56:30Z
--- license: other license_name: stabilityai-ai-community license_link: LICENSE.md tags: - text-to-image - stable-diffusion - diffusers inference: true extra_gated_prompt: >- By clicking "Agree", you agree to the [License Agreement](https://huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo/blob/main/LICENSE.md) and acknowledge Stability AI's [Privacy Policy](https://stability.ai/privacy-policy). extra_gated_fields: Name: text Email: text Country: country Organization or Affiliation: text Receive email updates and promotions on Stability AI products, services, and research?: type: select options: - 'Yes' - 'No' What do you intend to use the model for?: type: select options: - Research - Personal use - Creative Professional - Startup - Enterprise I agree to the License Agreement and acknowledge Stability AI's Privacy Policy: checkbox language: - en pipeline_tag: text-to-image --- # Stable Diffusion 3.5 Large Turbo ![3.5 Large Turbo Demo Image](sd3.5_large_turbo_demo.png) ## Model ![MMDiT](mmdit.png) [Stable Diffusion 3.5 Large Turbo](https://stability.ai/news/introducing-stable-diffusion-3-5) is a Multimodal Diffusion Transformer (MMDiT) text-to-image model with [Adversarial Diffusion Distillation (ADD)](https://stability.ai/research/adversarial-diffusion-distillation) that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency, with a focus on fewer inference steps. Please note: This model is released under the [Stability Community License](https://stability.ai/community-license-agreement). Visit [Stability AI](https://stability.ai/license) to learn more, or [contact us](https://stability.ai/enterprise) for commercial licensing details. ### Model Description - **Developed by:** Stability AI - **Model type:** MMDiT text-to-image generative model - **Model Description:** This model generates images based on text prompts. It is an ADD-distilled [Multimodal Diffusion Transformer](https://arxiv.org/abs/2403.03206) that uses three fixed, pretrained text encoders and QK-normalization. ### License - **Community License:** Free for research, non-commercial, and commercial use for organizations or individuals with less than $1M in total annual revenue. More details can be found in the [Community License Agreement](https://stability.ai/community-license-agreement). Read more at https://stability.ai/license. - **For individuals and organizations with annual revenue above $1M**: Please [contact us](https://stability.ai/enterprise) to get an Enterprise License. ### Model Sources For local or self-hosted use, we recommend [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for node-based UI inference, or [diffusers](https://github.com/huggingface/diffusers) or [GitHub](https://github.com/Stability-AI/sd3.5) for programmatic use. - **ComfyUI:** [Github](https://github.com/comfyanonymous/ComfyUI), [Example Workflow](https://comfyanonymous.github.io/ComfyUI_examples/sd3/) - **Huggingface Space:** [Space](https://huggingface.co/spaces/stabilityai/stable-diffusion-3.5-large-turbo) - **Diffusers**: [See below](#using-with-diffusers). - **GitHub**: [GitHub](https://github.com/Stability-AI/sd3.5). - **API Endpoints:** - [Stability AI API](https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1sd3/post) - [Deepinfra](https://deepinfra.com/stabilityai/sd3.5) ### Implementation Details - **QK Normalization:** Implements the QK-normalization technique to improve training stability.
- **Adversarial Diffusion Distillation (ADD)** (see the [technical report](https://stability.ai/research/adversarial-diffusion-distillation)), which allows sampling with 4 steps at high image quality. - **Text Encoders:** - CLIPs: [OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip), [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main), context length 77 tokens - T5: [T5-xxl](https://huggingface.co/google/t5-v1_1-xxl), context length 77/256 tokens at different stages of training - **Training Data and Strategy:** This model was trained on a wide variety of data, including synthetic data and filtered publicly available data. For more technical details of the original MMDiT architecture, please refer to the [Research paper](https://stability.ai/news/stable-diffusion-3-research-paper). ### Model Performance See [blog](https://stability.ai/news/introducing-stable-diffusion-3-5) for our study about comparative performance in prompt adherence and aesthetic quality. ## File Structure Click here to access the [Files and versions tab](https://huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo/tree/main)

```
│
├── text_encoders/  (text_encoder/text_encoder_1/text_encoder_2 are for diffusers)
│   ├── README.md
│   ├── clip_g.safetensors
│   ├── clip_l.safetensors
│   ├── t5xxl_fp16.safetensors
│   └── t5xxl_fp8_e4m3fn.safetensors
│
├── README.md
├── LICENSE
├── sd3_large_turbo.safetensors
├── SD3.5L_Turbo_example_workflow.json
└── sd3_large_turbo_demo.png

** File structure below is for diffusers integration**
├── scheduler/
├── text_encoder/
├── text_encoder_2/
├── text_encoder_3/
├── tokenizer/
├── tokenizer_2/
├── tokenizer_3/
├── transformer/
├── vae/
└── model_index.json
```

## Using with Diffusers Upgrade to the latest version of the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```
pip install -U diffusers
```

and then you can run

```py
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large-turbo", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")

image = pipe(
    "A capybara holding a sign that reads Hello Fast World",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("capybara.png")
```

### Quantizing the model with diffusers Reduce your VRAM usage and have the model fit on low-VRAM GPUs

```
pip install bitsandbytes
```

```py
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel
from diffusers import StableDiffusion3Pipeline
from transformers import T5EncoderModel  # needed for the NF4 T5 encoder below

model_id = "stabilityai/stable-diffusion-3.5-large-turbo"

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
model_nf4 = SD3Transformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16
)

t5_nf4 = T5EncoderModel.from_pretrained("diffusers/t5-nf4", torch_dtype=torch.bfloat16)

pipeline = StableDiffusion3Pipeline.from_pretrained(
    model_id,
    transformer=model_nf4,
    text_encoder_3=t5_nf4,
    torch_dtype=torch.bfloat16
)
pipeline.enable_model_cpu_offload()

prompt = "A whimsical and creative image depicting a hybrid creature that is a mix of a waffle and a hippopotamus, basking in a river of melted butter amidst a breakfast-themed landscape. It features the distinctive, bulky body shape of a hippo. However, instead of the usual grey skin, the creature's body resembles a golden-brown, crispy waffle fresh off the griddle. The skin is textured with the familiar grid pattern of a waffle, each square filled with a glistening sheen of syrup. The environment combines the natural habitat of a hippo with elements of a breakfast table setting, a river of warm, melted butter, with oversized utensils or plates peeking out from the lush, pancake-like foliage in the background, a towering pepper mill standing in for a tree. As the sun rises in this fantastical world, it casts a warm, buttery glow over the scene. The creature, content in its butter river, lets out a yawn. Nearby, a flock of birds take flight"

image = pipeline(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=0.0,
    max_sequence_length=512,
).images[0]
image.save("whimsical.png")
```

## Uses ### Intended Uses Intended uses include the following: * Generation of artworks and use in design and other artistic processes. * Applications in educational or creative tools. * Research on generative models, including understanding the limitations of generative models. All uses of the model must be in accordance with our [Acceptable Use Policy](https://stability.ai/use-policy). ### Out-of-Scope Uses The model was not trained to be factual or true representations of people or events. As such, using the model to generate such content is out of scope for this model's abilities. ## Safety As part of our safety-by-design and responsible AI deployment approach, we take deliberate measures to ensure integrity starts at the early stages of development. We implement safety measures throughout the development of our models. We have implemented safety mitigations that are intended to reduce the risk of certain harms; however, we recommend that developers conduct their own testing and apply additional mitigations based on their specific use cases. For more about our approach to Safety, please visit our [Safety page](https://stability.ai/safety). ### Integrity Evaluation Our integrity evaluation methods include structured evaluations and red-teaming testing for certain harms. Testing was conducted primarily in English and may not cover all possible harms. ### Risks identified and mitigations: * Harmful content: We have used filtered data sets when training our models and implemented safeguards that attempt to strike the right balance between usefulness and preventing harm. However, this does not guarantee that all possible harmful content has been removed. All developers and deployers should exercise caution and implement content safety guardrails based on their specific product policies and application use cases. * Misuse: Technical limitations and developer and end-user education can help mitigate against malicious applications of models. All users are required to adhere to our [Acceptable Use Policy](https://stability.ai/use-policy), including when applying fine-tuning and prompt engineering mechanisms. Please reference the Stability AI Acceptable Use Policy for information on violative uses of our products. * Privacy violations: Developers and deployers are encouraged to adhere to privacy regulations with techniques that respect data privacy. ## Contact Please report any issues with the model or contact us: * Safety issues: [email protected] * Security issues: [email protected] * Privacy issues: [email protected] * License and general: https://stability.ai/license * Enterprise license: https://stability.ai/enterprise
minchyeom/ThinkerGemma-2-GGUF
minchyeom
2024-10-22T16:08:13Z
10
0
transformers
[ "transformers", "gguf", "reasoning", "llama-cpp", "gguf-my-repo", "en", "dataset:starsnatched/thinker-formatted-2", "base_model:minchyeom/ThinkerGemma-2", "base_model:quantized:minchyeom/ThinkerGemma-2", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T15:47:33Z
---
library_name: transformers
tags:
- reasoning
- llama-cpp
- gguf-my-repo
datasets:
- starsnatched/thinker-formatted-2
language:
- en
base_model: starsnatched/ThinkerGemma-2
---

It's on [Ollama](https://ollama.com/starsnatched/thinker)!!

Use this system prompt:
```
You are a world-class AI system. Always respond in strict JSON format with a reasoning_steps array and a response field. Each reasoning step should represent one unit of thought, including observations, calculations, questions, realizations, corrections, etc. Once you realize you made a mistake in your reasoning steps, immediately correct it. Place your final response in the response field. Adhere to this JSON structure without exception.
```
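Since the system prompt specifies a strict JSON contract (a `reasoning_steps` array plus a `response` field), a minimal consumption sketch could look like the following. Note that the `raw` string below is an invented illustration of that contract, not actual model output:

```python
import json

# Hypothetical model output following the system prompt's JSON contract.
raw = '{"reasoning_steps": ["The user asks for 2 + 2.", "2 + 2 = 4."], "response": "4"}'

parsed = json.loads(raw)
for i, step in enumerate(parsed["reasoning_steps"], start=1):
    print(f"step {i}: {step}")          # one unit of thought per step
print("final answer:", parsed["response"])  # the model's final reply
```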
KomeijiForce/inbedder-roberta-large
KomeijiForce
2024-10-22T16:06:07Z
767
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "en", "dataset:KomeijiForce/Inbedder-Pretrain-Data", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-02-29T06:53:43Z
---
license: mit
datasets:
- KomeijiForce/Inbedder-Pretrain-Data
language:
- en
---

# [ACL2024] Answer is All You Need: Instruction-following Text Embedding via Answering the Question

InBedder🛌 is a text embedder that is designed to follow instructions. An instruction-following text embedder can capture characteristics of texts specified by user instructions. InBedder offers a novel viewpoint that treats the instruction as a question about the input text and encodes the expected answers to obtain the representation accordingly. We show that InBedder is aware of instructions across different evaluation tasks.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64323dd503d81fa4d26deaf9/jLbqF-2uT8Aw9DsN7XCVG.png)

The following is a use case from [https://github.com/zhang-yu-wei/InBedder/blob/main/UseCase.ipynb](https://github.com/zhang-yu-wei/InBedder/blob/main/UseCase.ipynb)

```python
import torch
from torch import nn
from torch.nn.functional import gelu, cosine_similarity
from transformers import AutoTokenizer, AutoModel, AutoModelForMaskedLM

import numpy as np

class InBedder():

    def __init__(self, path='KomeijiForce/inbedder-roberta-large', device='cuda:0'):

        model = AutoModelForMaskedLM.from_pretrained(path)

        self.tokenizer = AutoTokenizer.from_pretrained(path)
        self.model = model.roberta
        self.dense = model.lm_head.dense
        self.layer_norm = model.lm_head.layer_norm

        self.device = torch.device(device)
        self.model = self.model.to(self.device)
        self.dense = self.dense.to(self.device)
        self.layer_norm = self.layer_norm.to(self.device)

        self.vocab = self.tokenizer.get_vocab()
        self.vocab = {self.vocab[key]: key for key in self.vocab}

    def encode(self, input_texts, instruction, n_mask):

        # Append n_mask mask tokens to the instruction; their hidden states become the embedding.
        if type(instruction) == str:
            prompts = [instruction + self.tokenizer.mask_token*n_mask for input_text in input_texts]
        elif type(instruction) == list:
            prompts = [inst + self.tokenizer.mask_token*n_mask for inst in instruction]

        inputs = self.tokenizer(input_texts, prompts, padding=True, truncation=True, return_tensors='pt').to(self.device)

        mask = inputs.input_ids.eq(self.tokenizer.mask_token_id)

        outputs = self.model(**inputs)

        logits = outputs.last_hidden_state[mask]

        logits = self.layer_norm(gelu(self.dense(logits)))

        logits = logits.reshape(len(input_texts), n_mask, -1)

        # Average over the mask positions, then standardize each embedding.
        logits = logits.mean(1)

        logits = (logits - logits.mean(1, keepdim=True)) / logits.std(1, keepdim=True)

        return logits

inbedder = InBedder(path='KomeijiForce/inbedder-roberta-large', device='cpu')

texts = ["I love cat!", "I love dog!", "I dislike cat!"]
instruction = "What is the animal mentioned here?"
embeddings = inbedder.encode(texts, instruction, 3)

cosine_similarity(embeddings[:1], embeddings[1:], dim=1)
# tensor([0.9374, 0.9917], grad_fn=<SumBackward1>)

texts = ["I love cat!", "I love dog!", "I dislike cat!"]
instruction = "What is emotion expressed here?"
embeddings = inbedder.encode(texts, instruction, 3)

cosine_similarity(embeddings[:1], embeddings[1:], dim=1)
# tensor([0.9859, 0.8537], grad_fn=<SumBackward1>)
```
ihanif/whisper-turbo-ar-v2
ihanif
2024-10-22T16:05:55Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ps", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-22T11:22:22Z
---
library_name: transformers
language:
- ps
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
model-index:
- name: Whisper Small PS - Hanif Rahman
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Small PS - Hanif Rahman

This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5649
- eval_wer_ortho: 39.8068
- eval_wer: 39.2011
- eval_runtime: 655.7419
- eval_samples_per_second: 1.112
- eval_steps_per_second: 0.14
- epoch: 2.2676
- step: 1000

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
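The card ships no usage snippet; as a minimal inference sketch (not an official example), transcription with the 🤗 Transformers pipeline could look like this. The model id comes from this repo, while `sample.wav` stands in for any local audio file:

```python
from transformers import pipeline

# Minimal ASR sketch; "sample.wav" is an assumed local audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="ihanif/whisper-turbo-ar-v2",
)
print(asr("sample.wav")["text"])
```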
stdevelopment/mistral7bIns-dpo-v0rc0
stdevelopment
2024-10-22T16:05:09Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T16:00:40Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers base_model: mistralai/Mistral-7B-Instruct-v0.3 widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
winstonallo/bert_full_prod_data_100K
winstonallo
2024-10-22T16:04:35Z
107
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-22T15:39:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Neha13/Gujarati_Translation
Neha13
2024-10-22T15:57:10Z
138
0
transformers
[ "transformers", "safetensors", "IndicTrans", "text2text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text2text-generation
2024-10-22T14:49:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OverloadedOperator/tokomni-whisper-v2_0_6
OverloadedOperator
2024-10-22T15:55:48Z
5
0
null
[ "tensorboard", "safetensors", "whisper", "generated_from_trainer", "he", "dataset:OverloadedOperator/tests-101", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "region:us" ]
null
2024-10-22T15:40:56Z
---
language:
- he
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- OverloadedOperator/tests-101
model-index:
- name: Whisper Small He
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Small He

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the TestDS dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
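As a rough illustration, the hyperparameters listed above map onto 🤗 `Seq2SeqTrainingArguments` roughly as follows; this is a reconstruction sketch, and `output_dir` is an assumed value:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch reconstructing the hyperparameters listed above; output_dir is assumed.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-he",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10,
    fp16=True,  # "Native AMP" mixed-precision training
)
```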
Abdou/arabic-tashkeel-flan-t5-small
Abdou
2024-10-22T15:51:05Z
124
1
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-11T18:20:40Z
--- library_name: transformers tags: [] --- # تَشْكِيلُ النُّصُوصِ الْعَرَبِيَّةِ تِلْقَائِياً This model is trained to add tashkeel (Arabic diacritics) to Arabic text, enhancing readability and pronunciation. It may be also helpful for data synthesis training models on other tasks, such as text-to-speech, language modeling, and more. ## Model Details The model is based on the [Google's FLAN-T5 small model](https://huggingface.co/google/flan-t5-small) trained on [Arabic Tashkeel Dataset](https://huggingface.co/datasets/Abdou/arabic-tashkeel-dataset). The tokenizer is trained on a subset of the same dataset. ## How to Use You can use this model to add tashkeel to Arabic text by using the following code: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch model_path = "Abdou/arabic-tashkeel-flan-t5-small" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForSeq2SeqLM.from_pretrained(model_path) def vocalize_text(text, model, tokenizer, max_length=256, num_beams=4, temperature=0.2, do_sample=False): device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) model.eval() # Tokenize the input text inputs = tokenizer(text, return_tensors="pt", max_length=max_length, truncation=True, padding="max_length") inputs = {k: v.to(device) for k, v in inputs.items()} with torch.no_grad(): if do_sample: outputs = model.generate( **inputs, max_length=max_length, do_sample=True, temperature=temperature, ) else: outputs = model.generate( **inputs, max_length=max_length, num_beams=num_beams, early_stopping=True ) return tokenizer.decode(outputs[0], skip_special_tokens=True) # Quran texts quran_texts = [ "قل هو نبأ عظيم أنتم عنه معرضون", "إنما يخشى الله من عباده العلماء", "والله غالب على أمره ولكن أكثر الناس لا يعلمون", "وعلم ءادم الأسماء كلها ثم عرضهم على الملائكة فقال أنبؤوني بأسماء هؤلاء إن كنتم صادقين", "وإذ قال موسى لقومه يا قوم لم تؤذونني وقد تعلمون أني رسول الله إليكم", "ولله يسجد ما في السماوات وما في الارض من دابة والملاءكة وهم لا يستكبرون", "الذي أحسن كل شيء خلقه وبدأ خلق الإنسان من طين", ] # Hadith texts hadith_texts = [ "إن الله لا ينظر إلى صوركم وأموالكم ولكن ينظر إلى قلوبكم وأعمالكم", "عن أبي ذر جندب بن جنادة، وأبي عبدالرحمن معاذ بن جبل رضي الله عنهما، عن رسول الله ﷺ، قال: اتق الله حيثما كنت وأتبع السيئة الحسنة تمحها، وخالق الناس بخلق حسن . رواه الترمذي وقال: حديث حسن.", "المسلم من سلم المسلمون من لسانه ويده", "المؤمن القوي خير وأحب إلى الله من المؤمن الضعيف ، وفي كل خير.", ] # some Arabic texts arabic_texts = [ "إنما الأمم الأخلاق ما بقيت فإن هم ذهبت أخلاقهم ذهبوا", "يعد من أكبر علماء الأندلس وأكبر علماء الإسلام تصنيفًا وتأليفًا بعد الطبري، وهو إمام حافظ. فقيه ظاهري، ومجدد القول به، بل محيي المذهب بعد زواله في الشرق. ومتكلم وأديب وشاعر ونسّابة وعالم برجال الحديث وناقد محلل بل وصفه البعض بالفيلسوف كما عد من أوائل من قال بكروية الأرض، كما كان وزير سياسي لبني أمية، سلك طريق نبذ التقليد وتحرير الأتباع، قامت عليه جماعة من المالكية وشـُرد عن وطنه. توفي لاحقاً في منزله في أرض أبويه منت ليشم المعروفة بمونتيخار حالياً، وهي عزبة قريبة من ولبة. وأصل جده يزيد فارسي، أسلم وأول من دخل منهم بلاد المغرب، وكانت بلدهم قرطبة فولد ابن حزم بها في سلخ نهاية رمضان من سنة أربع وثمانين وثلاثمائة." 
] texts = [ ("Quran Texts", quran_texts), ("Hadith Texts", hadith_texts), ("Arabic Texts", arabic_texts) ] for title, texts in texts: print(f"=============== {title} ===============") for text in texts: vocalized_result = vocalize_text(text, model, tokenizer) print(f"Input: \n{text}") print("="*10) print(f"Prediction: \n{vocalized_result}") print("="*10) ``` **Output:** ```plaintext =============== Quran Texts =============== Input: قل هو نبأ عظيم أنتم عنه معرضون ========== Prediction: قُلْ هُوَ نَبَأٌ عَظِيمٌ أَنْتُمْ عَنْهُ مُعْرِضُونَ ========== Input: إنما يخشى الله من عباده العلماء ========== Prediction: إِنَّمَا يَخْشَى ٱللَّهَ مِنْ عِبَادِهِ ٱلْعُلَمَآءُ ========== Input: والله غالب على أمره ولكن أكثر الناس لا يعلمون ========== Prediction: وَٱللَّهُ غَالِبٌ عَلَىٰٓ أَمْرِهِۦ وَلَٰكِنَّ أَكْثَرَ ٱلنَّاسِ لَا يَعْلَمُونَ ========== Input: وعلم ءادم الأسماء كلها ثم عرضهم على الملائكة فقال أنبؤوني بأسماء هؤلاء إن كنتم صادقين ========== Prediction: وَعَلِّمْ ءَادَمَ ٱلْأَسْمَآءَ كُلَّهَا ثُمَّ عَرَضَهُمْ عَلَى ٱلْمَلاَئِكَةِ فَقَالَ أَنبَؤُونِي بِأَسْمَاءِ هَٰٓؤُلَآءِ إِن كُنتُمْ صَادِقِينَ ========== Input: وإذ قال موسى لقومه يا قوم لم تؤذونني وقد تعلمون أني رسول الله إليكم ========== Prediction: وَإِذْ قَالَ مُوسَىٰ لِقَوْمِهِ يَا قَوْمِ لِمَ تُؤْذُونَنِي وَقَدْ تَعْلَمُونَ أَنِّي رَسُولُ اللَّهِ إِلَيْكُمْ ========== Input: ولله يسجد ما في السماوات وما في الارض من دابة والملاءكة وهم لا يستكبرون ========== Prediction: وَلِلَّهِ يَسْجُدُ مَا فِي السَّمَاوَاتِ وَمَا فِي الْارْضِ مِنْ دَابَّةٍ وَالْمَلَائِكَةُ وَهُمْ لَا يَسْتَكْبِرُونَ ========== Input: الذي أحسن كل شيء خلقه وبدأ خلق الإنسان من طين ========== Prediction: الَّذِي أَحْسَنَ كُلَّ شَيْءٍ خَلْقَهُ وَبَدَأَ خَلْقَ الْإِنْسَانِ مِنْ طِينٍ ========== =============== Hadith Texts =============== Input: إن الله لا ينظر إلى صوركم وأموالكم ولكن ينظر إلى قلوبكم وأعمالكم ========== Prediction: إِنَّ اللَّهَ لَا يَنْظُرُ إِلَىٰ صُوَرِكُمْ وَأَمْوَالِكُمْ وَلَكِنْ يَنْظُرُ إِلَىٰ قُلُوبِكُمْ وَأَعْمَالِكُمْ ========== Input: عن أبي ذر جندب بن جنادة، وأبي عبدالرحمن معاذ بن جبل رضي الله عنهما، عن رسول الله ﷺ، قال: اتق الله حيثما كنت وأتبع السيئة الحسنة تمحها، وخالق الناس بخلق حسن . رواه الترمذي وقال: حديث حسن. ========== Prediction: عَنْ أَبِي ذَرٍّ جُنْدُبِ بْنِ جُنَادَةَ، وَأَبِي عَبْدِالرَّحْمَنِ مُعَاذِ بْنِ جَبَلٍ رَضِيَ اللَّهُ عَنْهُمَا، عَنْ رَسُولِ اللَّهِ صَلَّى اللَّهُ عَلَيْهِ وَسَلَّمَ، قَالَ: اتَّقِ اللَّهَ حَيْثُمَا كُنْتَ وَأَتْبِعِ السَّيِّئَةَ الْحَسَنَةَ تَمْحُهَا، وَخَالِقِ النَّاسَ بِخُلُقٍ حَسَنٍ . رَوَاهُ التِّرْمِذِيُّ وَقَالَ: حَدِيثٌ حَسَنٌ. ========== Input: المسلم من سلم المسلمون من لسانه ويده ========== Prediction: الْمُسْلِمُ مَنْ سَلِمَ الْمُسْلِمُونَ مِنْ لِسَانِهِ وَيَدِهِ ========== Input: المؤمن القوي خير وأحب إلى الله من المؤمن الضعيف ، وفي كل خير. ========== Prediction: الْمُؤْمِنُ الْقَوِيُّ خَيْرٌ وَأَحَبُّ إِلَى اللَّهِ مِنَ الْمُؤْمِنِ الضَّعِيفِ ، وَفِي كُلِّ خَيْرٍ. ========== =============== Arabic Texts =============== Input: إنما الأمم الأخلاق ما بقيت فإن هم ذهبت أخلاقهم ذهبوا ========== Prediction: إِنَّمَا الأُمَمُ الأَخْلاقُ مَا بَقِيَتْ فَإِنْ هُمْ ذَهَبَتْ أَخْلاقُهُمْ ذَهَبُوا ========== Input: يعد من أكبر علماء الأندلس وأكبر علماء الإسلام تصنيفًا وتأليفًا بعد الطبري، وهو إمام حافظ. فقيه ظاهري، ومجدد القول به، بل محيي المذهب بعد زواله في الشرق. ومتكلم وأديب وشاعر ونسّابة وعالم برجال الحديث وناقد محلل بل وصفه البعض بالفيلسوف كما عد من أوائل من قال بكروية الأرض، كما كان وزير سياسي لبني أمية، سلك طريق نبذ التقليد وتحرير الأتباع، قامت عليه جماعة من المالكية وشـُرد عن وطنه. 
توفي لاحقاً في منزله في أرض أبويه منت ليشم المعروفة بمونتيخار حالياً، وهي عزبة قريبة من ولبة. وأصل جده يزيد فارسي، أسلم وأول من دخل منهم بلاد المغرب، وكانت بلدهم قرطبة فولد ابن حزم بها في سلخ نهاية رمضان من سنة أربع وثمانين وثلاثمائة. ========== Prediction: يُعَدُّ مِنْ أَكْبَرِ عُلَمَاءِ الْأَنْدَلُسِ وَأَكْبَرِ عُلَمَاءِ الْإِسْلَامِ تَصْنِيفًا وَتَأْلِيفًا بَعْدَ الطَّبَرِيِّ، وَهُوَ إِمَامٌ حَافِظٌ. فَقِيهٌ ظَاهِرِيٌّ، وَمُجَدِّدُ الْقَوْلِ بِهِ، بَلْ مُحْيِي الْمَذْهَبِ بَعْدَ زَوَالِهِ فِي الشَّرْقِ. وَمُتَكَلِّمٌ وَأَدِيبٌ وَشَاعِرٌ وَنَسَّابَةٌ وَعَالِمٌ بِرِجَالِ الْحَدِيثِ وَنَاقِدٌ مُحَلَّلٌ بَلْ وَصَفَهُ الْبَعْضُ بِالْفَيْلَسُوفِ كَمَا عُدَّ مِنْ أَوَائِلِ مَنْ قَالَ بِكُرَوِيَّةِ الْأَرْضِ، كَمَا كَانَ وَزِيرٌ سِيَاسِيٌّ لِبَنِي أُمَيَّةَ، سَلَكَ طَرِيقَ نَبْذِ التَّقْلِيدِ وَتَحْرِيرَ الْأَتْبَاعِ، قَامَتْ عَلَيْهِ جَمَاعَةٌ مِنَ الْمَالِكِيَّةِ وَشَـرُّدٌ عَنْ وَطَنِهِ. تُوُفِّيَ لَاحِقًا فِي مَنْزِلِهِ فِي أَرْضِ أَبَوَيْهِ مُنْتَ لِيَشُمَّ الْمَعْرُوفَةَ بِمُونتِيخَارٍ حَالِيًّا، وَهِيَ عَزْبَةٌ قَرِيبَةٌ مِنْ وَلُبَّةٍ. وَأَصْلُ جَدِّهِ يَزِيدُ فَارِسِيٌّ، أَسْلَمَ وَأَوَّلُ مَنْ دَخَلَ مِنْهُمْ بِلَادَ الْمَغْرِبِ، وَكَانَتْ بَلَدُهُمْ قُرْطُبَةَ فَوُلِدَ ابْنُ حَزْمٍ بِهَا فِي سَلْخِ نِهَايَةِ رَمَضَانَ مِنْ سَنَةِ أَرْبَعٍ وَثَمَانِينَ وَثَلَاثِمِائَةٍ. ========== ``` As you can see, the model is mostly accurate, but may make some mistakes, like `وَعَلِّمْ` instead of `وَعَلَّمَ` or `وَفِي كُلِّ خَيْرٍ` instead of `وَ فِي كُلٍّ خَيْرٌ`. By default, the above `vocalize_text()` function is using Beam Search decoding (`num_beams=4`), you can use sampling by setting `do_sample=True` and `temperature` to a value between 0 and 1. ## Limitations As mentioned in the [dataset card](https://huggingface.co/datasets/Abdou/arabic-tashkeel-dataset), the dataset is largely religious classical Arabic texts (~90%), so the model may not be perfect for Modern Standard Arabic or dialects.
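Reusing the card's own `vocalize_text` helper defined above, switching from the default beam search to sampling is a one-line change:

```python
# Sampling decode with a low temperature, reusing the vocalize_text helper from above.
result = vocalize_text("قل هو نبأ عظيم أنتم عنه معرضون", model, tokenizer,
                       do_sample=True, temperature=0.2)
print(result)
```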
angie-chen55/pythia2.8b-dpo-hh-checkpoint-6000
angie-chen55
2024-10-22T15:43:19Z
5
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T15:40:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zelk12/MT-Merge-MMB-gemma-2-9B
zelk12
2024-10-22T15:39:53Z
6
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:zelk12/MT-Merge-BB-gemma-2-MT5MT3-9B", "base_model:merge:zelk12/MT-Merge-BB-gemma-2-MT5MT3-9B", "base_model:zelk12/MT-Merge-MM-gemma-2-MT2MT1-9B", "base_model:merge:zelk12/MT-Merge-MM-gemma-2-MT2MT1-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T15:33:25Z
---
base_model:
- zelk12/MT-Merge-BB-gemma-2-MT5MT3-9B
- zelk12/MT-Merge-MM-gemma-2-MT2MT1-9B
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [zelk12/MT-Merge-BB-gemma-2-MT5MT3-9B](https://huggingface.co/zelk12/MT-Merge-BB-gemma-2-MT5MT3-9B)
* [zelk12/MT-Merge-MM-gemma-2-MT2MT1-9B](https://huggingface.co/zelk12/MT-Merge-MM-gemma-2-MT2MT1-9B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: zelk12/MT-Merge-MM-gemma-2-MT2MT1-9B
  - model: zelk12/MT-Merge-BB-gemma-2-MT5MT3-9B
merge_method: slerp
base_model: zelk12/MT-Merge-MM-gemma-2-MT2MT1-9B
dtype: bfloat16
parameters:
  t: 0.5
```
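As a minimal loading sketch (the repo id comes from this card; bfloat16 mirrors the `dtype` in the merge config):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model; bfloat16 matches the dtype used in the merge config above.
model_id = "zelk12/MT-Merge-MMB-gemma-2-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
```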
mav23/MN-12B-Celeste-V1.9-GGUF
mav23
2024-10-22T15:37:47Z
14
0
transformers
[ "transformers", "gguf", "en", "dataset:nothingiisreal/c2-logs-cleaned", "dataset:kalomaze/Opus_Instruct_25k", "dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T13:46:17Z
--- library_name: transformers language: - en datasets: - nothingiisreal/c2-logs-cleaned - kalomaze/Opus_Instruct_25k - nothingiisreal/Reddit-Dirty-And-WritingPrompts license: apache-2.0 --- <style> h1, h2, h3, h4, h5, h6 { line-height: normal; margin-bottom: 0.5em; } h1 { font-size: 3em; } h2 { font-size: 1.6em; } p, ul, ol, strong, summary { font-size: 1.1em; } .line-spaceless { line-height: 1; margin: 0; padding: 0; } .half-space { line-height: 0.5em; margin-bottom: 0.25em; } .text-center { text-align: center; } .tiny-text { font-size: 0.8em; } </style> <h1 class="line-spaceless text-center">Celeste V1.9</h1> <p class="half-space text-center tiny-text">Based on Mistral Nemo 12B</p> <h2 style="color: red; font-weight: bold;">Read the Usage Tips Below! Use ChatML.</h2><h2>Join <a href="https://discord.gg/EWzsFddYAd">our Discord</a> for testing newer versions and news! We are also on KoboldAI</h2> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/QcU3xEgVu18jeFtMFxIw-.webp" alt="" width="800"/> **This is a story writing and roleplaying model** trained on [Mistral NeMo 12B Instruct](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) at 8K context using [Reddit Writing Prompts](https://huggingface.co/datasets/nothingiisreal/Reddit-Dirty-And-WritingPrompts), [Kalo's Opus 25K Instruct](https://huggingface.co/datasets/kalomaze/Opus_Instruct_25k) and <br>[c2 logs cleaned](https://huggingface.co/datasets/Sao10K/c2-Logs-Filtered) Thank you Pyroserenus for sponsoring this run! This version has improved NSFW, smarter and more active narration. It's also trained with ChatML tokens so there should be no EOS bleeding whatsoever. # FP8 [Dynamic (by Auri)](https://huggingface.co/aetherwiing/MN-12B-Celeste-V1.9-fp8-dynamic) # EXL2 [4bpw By Kingbri](https://huggingface.co/royallab/MN-12B-Celeste-V1.9-exl2/tree/4bpw)<br> [5bpw By Kingbri](https://huggingface.co/royallab/MN-12B-Celeste-V1.9-exl2/tree/5bpw)<br> [6bpw By Kingbri](https://huggingface.co/royallab/MN-12B-Celeste-V1.9-exl2/tree/6bpw) # GGUF If one doesn't work, try the other. [Static Quants](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9-GGUF)<br> [IMatrix Quants](https://huggingface.co/bartowski/MN-12B-Celeste-V1.9-GGUF) # API [Featherless](https://featherless.ai/models/nothingiisreal/MN-12B-Celeste-V1.9) <br> [Infermatic](https://infermatic.ai/) <br> [OpenRouter](https://openrouter.ai/models/nothingiisreal/mn-celeste-12b) --- <h1>Usage Tips</h1> <p style="font-size: 20px; color: red; font-weight: bold;"> READ: If this is your first time using the model, use the provided system message and sampling settings below. Remove other jailbreaks and system messages until you get a feel for the model. </p> <span style="color: green; font-weight: bold;">If you read every single tip I promise you will get a much better experience as they are tailored for this model and its training data.</span> <h2>Sampler Settings for V1.9</h2> **Okay so there is actually 2 recommended ones this time** I usually start the first few messages with Stable and see how it goes. If it falls into repetition I switch to Creative. But you can also just use either the whole way through, creative may need a few swipes from time to time. 
<strong>> Stable</strong>

<img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/1m18WnuomY8jEZTA87Iun.png" alt="" width="400"/>

<strong>> Creative</strong>

<img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/DaL2hWZst0yW34CYK4df8.png" alt="" width="400"/>

Don't shy away from experimenting after you get a feel for the model though.

<h2>Preset</h2>

<br><span style="font-weight: bold;">ChatML with no system prompt. Yes we trained actual ChatML tokens this time.</span>

You don't need a JB but it can still steer behaviour.

<h2>System Prompt</h2>

You can use a system prompt; if you want one to get started with:<br>

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.
```

<h2>Story Writing</h2>

**Adding the below system prompt will likely increase the humanness of the prose** as we trained system prompts. You can also change it to NSFW, but you should try SFW regardless of whether you are writing NSFW or not.<br>
You should also try forcing the assistant reply to start with a `*` due to how we trained on human stories.

```
You are a short story writer. Write a story based on prompt provided by user below. Mode: SFW
```

If your first message is using human-like prose, Celeste will copy it in the next messages, check out the Showcase below.

<h2>Swipes</h2>

**Important: swipe 2-3 times if you don't like a response** This model gives wildly differing swipes.

<h2>OOC Steering</h2>

**Use this! It works quite well.** We specifically trained the model to accept instructions in the format "OOC: character should be more assertive" etc.

It works, whether the very first message or thousands of tokens deep into the context. Combining this with editing the output (if you want) makes the model very steerable.

<h2>"Dead Dove"</h2>

For character cards with persistent motivations throughout the story, use world books at low depth [tutorial here](https://huggingface.co/nothingiisreal/how-to-use-ST-worldinfo)

<h2>Fewshot</h2>

First message and last few messages impact this model quite a lot in terms of style, hornyness, personality.
**You don't need to have a first message, but editing the first few messages or having good ones is highly recommended.**

**Formatting issues** often occur in the first few messages; manually correct them or swipe. Seems to not be a problem with 12B though.<br>
This model was trained on lots of different formatting types and message lengths. It can do any, just make sure the initial message is good and correct the second message if necessary.
<br>

<h2>Hornyness</h2>

If the model is not horny enough then just edit the last character message or do an OOC: prompt, the model will pick up on it and build on it. (Or just give the char aphrodisiac pills lol)
<br>
The model is fine with SFW and doesn't make it NSFW unless you want. It is also able to maintain half-NSFW (aka slow burn) without devolving into hardcore.

If you want only SFW and are having troubles, there is probably some system prompt that will fix it, maybe at depth 1 or something.
<br>

<h2>Refusals</h2>

As said, if instruct refusal occurs (very rare), prefill 2-3 words. **Refusals of romantic advances (which almost never happen on 12B) are realistic and we think that is good. Prefill if you don't like it.**
<br>

<h2>Mistral Context</h2>

While trained on 8K, the model should be able to inherit longer context from Mistral 12B. Should be at minimum 16K.

<h2>Other Important Tips</h2>

Take an active role in the RP and say the type of response that would create the scenario you are imagining. You don't always have to do this, but it helps sometimes. For example instead of *we drink and drink 15 glasses of champagne* say *we drink and drink 15 glasses of champagne, both becoming extremely drunk*
<br>Another example instead of *I pull her closer* say *I pull her closer but she plays hard to get*

When convenient, say screenplay phrases like "cut to"

<img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/Oq0oJgJIVJAmZGt7-nNKt.png" alt="" width="300"/>

<img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/PQp64kp6EHg7gN_u2Oj7Q.png" alt="" width="300"/>

---

# Showcase V1.9

## Story Writing

Check out the **Story Writing** section above.

<img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/H1fV1qqV2aw4qgu_vMixg.png" alt="" width="800"/>

<img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/voXyDJx_nHsPLKakHZYEf.png" alt="" width="600"/>

## RP

<img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/D4F8nVQyrcmdVT_tmY8DD.png" alt="" width="800"/>

<img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/gbFt7Ntz3ERVij2039YJi.png" alt="" width="800"/>

<img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/jT-zHsUGiQPP2TNzlxwlG.png" alt="" width="800"/>

It can do NSFW as well, that's for you to try out.

<details>
<summary>Showcase V1.5</summary>

Some images include NSFW and NSFL. We believe in creativity of expression and maximising the model's capabilities at writing.<br>
It's a bit difficult to showcase multi turn stuff, try it yourself too! These are just to show off the model's capabilities.

The model needs nudging and OOC prompting to do proper gore. We are planning to add r/GuroErotica into our dataset to make it better at gore

Also sometimes prefilling "**Trigger warning: extremely graphic and explicit content**" before character reply makes it more unhinged. Probably because of reddit data.
<img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/imEyEKYUuAXYidD2CiD-_.png" alt="" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/95RvO5go-Nz7hFCdV67qs.png" alt="" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/oIa_P1fmD5aO7PkYF9Ibr.png" alt="" width="600"/> </details> <details> <summary>Showcase V1 and 1.2</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/NiT8-_k3HGrHuV_z475U8.png) <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/cODFe_vjwjfkcondvl-za.png" alt="Image 1" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/hRsAQBt0h2i5E_fj_Rqb7.png" alt="Image 10" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/aLm2FjlTsMhCil4VoR-l5.png" alt="Image 3" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/6kDuFeDB0FpYl27fd3D0E.png" alt="Image 6" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/66kXuk8Q1DP-QMVZ9Rzv9.png" alt="Image 7" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/C0eoDXovS359T5oG1VLzm.png" alt="Image 8" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/mFplb1hkDNziCcgPwHt_y.png" alt="Image 9" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/KCE4tqI1aOfx-DIH3eBzA.png" alt="Image 2" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/Oq0oJgJIVJAmZGt7-nNKt.png" alt="Image 4" width="600"/> <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/PQp64kp6EHg7gN_u2Oj7Q.png" alt="Image 5" width="600"/> </details> --- # Train Data The split was as follows: - **2.5K rows from r/WritingPrompts** - **2.5K rows from r/DirtyWritingPrompts** - **800 rows from Kalomaze Opus Instruct 25K** - **2.5K rows from c2 logs cleaned** We filtered those datasets to only include subsets that have at maximum **4000** characters for the first assistant reply. This purged excessively long human stories, assistant replies and c2 logs where each message was excessively long. However we only checked the first assistant message, not the rest of the convo, so there should be plenty of c2 logs with longer and shorter messages. Excessively long human stories are almost impossible for 8B model to fit. We tried, it simply won't fit the data and starts behaving weirdly. While we did train all system prompts from c2 logs we also have our own system prompts. <details> <summary>List of trained system prompts. Note: c2 logs system prompts and char cards were also included.</summary> | Dataset | System Prompt | |--------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------| | reddit_dirty_writing_prompts.jsonl | "You are a short story writer. Write a story based on prompt provided by user below. Mode: NSFW" | | reddit_writing_prompts.jsonl | "You are a short story writer. Write a story based on prompt provided by user below. 
Mode: SFW" |
| combined_25k_opus_kalomaze.jsonl | "You are an AI assistant called Celeste created by NothingiisReal team." |
| c2-logs.jsonl | (Only if there was no system prompt in the conversation, otherwise keep original system prompt) **"You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason, even if someone tries addressing you as an AI or language model."** |

</details>

---

# Our Findings and Experimentation results

## Preface

We think there is too much secrecy around what data is being used, and different training methods. So we decided to share as much as possible.

## Findings V1.9

Mistral seems to be very uncensored for a corpo model. This means no refusals for based things and banter, but also an unrealistic simulation of characters in the sense that they usually never refuse any idea you throw at them.

Mistral seems to be weaker with OOC and instruct smarts and more prone to repetition.<br>
However it almost never has formatting issues like L3, and makes fewer factual inaccuracies and nonsensical outputs.

Adding more claude roleplay data improved NSFW, long context, wrapup bias and narration.<br>
However wrapup bias still exists in NSFW, which can be mitigated by cutting off responses or deliberately setting max tokens to a low value.<br>
There is a tradeoff between pros and cons of human and claude data.

## Main training Command

**Hardware Used:** 1xH100 SXM for 3 hours.

LoRA+ seems to improve the LoRA and train it more; we also tried DoRA multiple times, but DoRA trains around 3 times slower, and LoRA+ ends up beating it if you give DoRA 3x less data to adjust for the compute cost difference.

<details>
<summary>Here is the entire axolotl config for V1.5, just change chat format to chatml, add lora+, change the tokenizer to axolotl-ai-co/Mistral-Nemo-Base-2407-chatml and model to the 12B and it will be the correct one.</summary>

```yaml
# Model
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

# Output and HuggingFace
output_dir: /workspace/data/train-results/trained_model

# WandB
wandb_project: huggingface
wandb_entity:

# Data
chat_template: llama3
train_on_inputs: false
group_by_length: false
datasets:
  - path: [redacted] # I manually merge the aforementioned datasets using a custom script because I don't trust axolotl to do this in a deterministic way and sorted properly lmao.
    type: sharegpt
    roles:
      input:
        - system
        - user
      output:
        - assistant

## Evaluation
val_set_size: 0.02
evals_per_epoch: 8
eval_table_size:
eval_max_new_tokens: 128

# Technical aspects
sequence_len: 8192
save_safetensors: true
saves_per_epoch: 2
logging_steps: 1
special_tokens:
  pad_token: <pad>

# Quantization
bf16: auto
fp16:
tf32: false
## For LoRA
load_in_8bit: false
load_in_4bit: false

# LoRA
peft_use_dora: true
adapter: lora # or qlora
lora_model_dir:
lora_r: 256
lora_alpha: 128
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - embed_tokens
  - lm_head

# Training hyperparameters
# max_steps:
num_epochs: 2

# Anti Overfit and Stability
weight_decay: 0.0
max_grad_norm: 1.0

## Learning Rate
warmup_ratio: 0.05
learning_rate: 0.000008
lr_scheduler: cosine_with_min_lr
lr_scheduler_kwargs:
  min_lr: 0.0000024
optimizer: paged_adamw_8bit

## Batch Size
gradient_accumulation_steps: 1
micro_batch_size: 2 # Batch size per gpu = micro_batch_size * gradient_accumulation_steps
eval_batch_size: 2

# Optimizations
pad_to_sequence_len: true
sample_packing: true
eval_sample_packing: false
flash_attention: true
xformers_attention:
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
local_rank:
# deepspeed: # /workspace/axolotl/deepspeed_configs/zero2.json # Only use with multi gpu # zero3_bf16.json
# fsdp:
#   - full_shard
#   - auto_wrap
# fsdp_config:
#   fsdp_limit_all_gathers: true
#   fsdp_sync_module_states: true
#   fsdp_offload_params: true
#   fsdp_use_orig_params: false
#   fsdp_cpu_ram_efficient_loading: true
#   fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
#   fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
#   fsdp_state_dict_type: FULL_STATE_DICT
#   fsdp_sharding_strategy: FULL_SHARD

# Misc
early_stopping_patience:
debug:
```

</details>

---

Wow, you've read all of that? You seem like the person that would join our [discord](https://discord.gg/EWzsFddYAd)

70B at some point? ;) We are closer than ever to this. For real this time.

If you want to support me you can do so [here](https://ko-fi.com/LLemmy) - Lemmy <br>
And if you'd like to support me - here's my Metamask (ETH) wallet - `0xb4B5Df1D0CFf327AA2cd8fCa1815eADfFA2F9369` - Auri/Aetherwiing
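Since the card insists on ChatML, here is a minimal prompt-building sketch. It assumes the full-precision repo nothingiisreal/MN-12B-Celeste-V1.9 ships a ChatML chat template in its tokenizer config:

```python
from transformers import AutoTokenizer

# Assumes the repo's tokenizer ships a ChatML chat template.
tokenizer = AutoTokenizer.from_pretrained("nothingiisreal/MN-12B-Celeste-V1.9")
messages = [{"role": "user", "content": "OOC: keep narration active. Start the scene."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # <|im_start|>user ... <|im_end|><|im_start|>assistant
```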
andriadze/anthracite-magnum-v3-34b-awq
andriadze
2024-10-22T15:33:58Z
10
0
null
[ "safetensors", "llama", "awq", "text-generation-inference", "text-generation", "conversational", "base_model:anthracite-org/magnum-v3-34b", "base_model:quantized:anthracite-org/magnum-v3-34b", "4-bit", "region:us" ]
text-generation
2024-10-17T13:57:49Z
--- base_model: - anthracite-org/magnum-v3-34b pipeline_tag: text-generation tags: - awq - text-generation-inference --- ## Model Details This is an AWQ GEMM quant of magnum-v3-34b: https://huggingface.co/anthracite-org/magnum-v3-34b ### Model Description The model was quantized on 6x RTX 4090 GPUs with the following quantization parameters: "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"
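## Usage (sketch)

The card stops at the quantization parameters, so here is a minimal, untested inference sketch using vLLM, which supports AWQ GEMM checkpoints; the sampling values and `max_model_len` are illustrative assumptions, not settings recommended by the quantizer.

```python
# Hypothetical vLLM loading sketch for this AWQ GEMM quant (untested).
from vllm import LLM, SamplingParams

llm = LLM(
    model="andriadze/anthracite-magnum-v3-34b-awq",
    quantization="awq",   # matches the GEMM AWQ checkpoint format
    dtype="float16",      # AWQ kernels run with fp16 activations
    max_model_len=4096,   # assumption: lower this if you run out of VRAM
)

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)
outputs = llm.generate(["Write a short scene set in a rainy harbor town."], params)
print(outputs[0].outputs[0].text)
```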
BroAlanTaps/GPT2-large-128-50000steps
BroAlanTaps
2024-10-22T15:28:07Z
120
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T15:26:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cuongdev/tonghop-v2-2500
cuongdev
2024-10-22T14:59:12Z
29
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-10-22T14:55:41Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### tonghop-v2-2500 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
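The card only links the Colab notebooks, but since the repository is tagged `diffusers:StableDiffusionPipeline`, loading it directly with diffusers should also work. A minimal sketch, assuming a CUDA GPU; the prompt below is a placeholder because the card does not state the concept's instance token.

```python
# Minimal diffusers sketch for this DreamBooth checkpoint (prompt is a placeholder).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cuongdev/tonghop-v2-2500", torch_dtype=torch.float16
).to("cuda")

# Replace the placeholder with the concept's actual instance token.
image = pipe("a photo of the trained concept", num_inference_steps=30).images[0]
image.save("sample.png")
```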
ctranslate2-4you/Mistral-7B-Instruct-v0.3-ct2-int8
ctranslate2-4you
2024-10-22T14:54:48Z
5
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2024-10-22T13:41:44Z
--- license: apache-2.0 --- # Model Card for Mistral-7B-Instruct-v0.3 The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3. Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2): - Extended vocabulary to 32768 - Supports v3 Tokenizer - Supports function calling ## Installation It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling. ``` pip install mistral_inference ``` ## Download ```py from huggingface_hub import snapshot_download from pathlib import Path mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3') mistral_models_path.mkdir(parents=True, exist_ok=True) snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path) ``` ### Chat After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using ``` mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256 ``` ### Instruct following ```py from mistral_inference.model import Transformer from mistral_inference.generate import generate from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.protocol.instruct.messages import UserMessage from mistral_common.protocol.instruct.request import ChatCompletionRequest tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3") model = Transformer.from_folder(mistral_models_path) completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")]) tokens = tokenizer.encode_chat_completion(completion_request).tokens out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]) print(result) ``` ### Function calling ```py from mistral_common.protocol.instruct.tool_calls import Function, Tool from mistral_inference.model import Transformer from mistral_inference.generate import generate from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.protocol.instruct.messages import UserMessage from mistral_common.protocol.instruct.request import ChatCompletionRequest tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3") model = Transformer.from_folder(mistral_models_path) completion_request = ChatCompletionRequest( tools=[ Tool( function=Function( name="get_current_weather", description="Get the current weather", parameters={ "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA", }, "format": { "type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use. Infer this from the user's location.", }, }, "required": ["location", "format"], }, ) ) ], messages=[ UserMessage(content="What's the weather like today in Paris?"), ], ) tokens = tokenizer.encode_chat_completion(completion_request).tokens out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]) print(result) ``` ## Generate with `transformers` If you want to use Hugging Face `transformers` to generate text, you can do something like this. ```py from transformers import pipeline messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3") chatbot(messages) ``` ## Limitations The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
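Note: this repository hosts a CTranslate2 int8 conversion of the model, while the card above is the upstream Mistral card. A minimal CTranslate2 sketch is shown below; the local model path and generation settings are assumptions, and the tokenizer is borrowed from the original repository.

```python
# Hypothetical CTranslate2 sketch for the int8 conversion (untested; paths are assumptions).
import ctranslate2
from transformers import AutoTokenizer

# Point this at the local directory containing the converted model files.
generator = ctranslate2.Generator("Mistral-7B-Instruct-v0.3-ct2-int8", device="cuda")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

prompt = "[INST] Explain Machine Learning to me in a nutshell. [/INST]"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch(
    [tokens], max_length=128, sampling_temperature=0.0, include_prompt_in_result=False
)
print(tokenizer.decode(results[0].sequences_ids[0]))
```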
huihui-ai/Llama-3.2-11B-Vision-Instruct-abliterated
huihui-ai
2024-10-22T14:54:34Z
6,704
21
transformers
[ "transformers", "safetensors", "mllama", "image-text-to-text", "facebook", "meta", "pytorch", "llama", "llama-3", "abliterated", "uncensored", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-11B-Vision-Instruct", "base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct", "license:llama3.2", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-10-19T14:44:04Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: image-text-to-text base_model: meta-llama/Llama-3.2-11B-Vision-Instruct tags: - facebook - meta - pytorch - llama - llama-3 - abliterated - uncensored --- # huihui-ai/Llama-3.2-11B-Vision-Instruct-abliterated This is an uncensored version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it). This is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens. Only the text component was processed; the vision component was left untouched.
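The card does not include a usage snippet; since the checkpoint keeps the original `mllama` architecture, the standard Llama 3.2 Vision loading pattern in transformers should apply. A sketch under that assumption (the image URL is a placeholder):

```python
# Standard Llama 3.2 Vision usage pattern (requires transformers >= 4.45; untested here).
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "huihui-ai/Llama-3.2-11B-Vision-Instruct-abliterated"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image: substitute any local or remote image.
image = Image.open(requests.get("https://example.com/image.jpg", stream=True).raw)

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, input_text, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(output[0], skip_special_tokens=True))
```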
ddobokki/ko-trocr
ddobokki
2024-10-22T14:54:04Z
921
23
transformers
[ "transformers", "pytorch", "safetensors", "vision-encoder-decoder", "image-text-to-text", "ocr", "image-to-text", "ko", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2023-03-09T05:43:24Z
--- language: - ko tags: - ocr widget: - src: https://raw.githubusercontent.com/ddobokki/ocr_img_example/master/g.jpg example_title: word1 - src: https://raw.githubusercontent.com/ddobokki/ocr_img_example/master/khs.jpg example_title: word2 - src: https://raw.githubusercontent.com/ddobokki/ocr_img_example/master/m.jpg example_title: word3 pipeline_tag: image-to-text license: apache-2.0 --- # korean trocr model - Because a TrOCR model cannot OCR characters that are missing from its decoder's tokenizer, this model uses a decoder whose tokenizer covers Korean initial consonants (choseong), so that even bare choseong are not decoded as UNK. - Built using the know-how gained from the [2023 Kyowon Group AI OCR Challenge](https://dacon.io/competitions/official/236042/overview/description). ## train datasets AI Hub - [다양한 형태의 한글 문자 OCR](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=91) (OCR for various forms of Korean characters) - [공공행정문서 OCR](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=88) (OCR for public administrative documents) ## model structure - encoder : [trocr-base-stage1's encoder](https://huggingface.co/microsoft/trocr-base-stage1) - decoder : [KR-BERT-char16424](https://huggingface.co/snunlp/KR-BERT-char16424) ## how to use ```python from transformers import TrOCRProcessor, VisionEncoderDecoderModel, AutoTokenizer import requests import unicodedata from io import BytesIO from PIL import Image processor = TrOCRProcessor.from_pretrained("ddobokki/ko-trocr") model = VisionEncoderDecoderModel.from_pretrained("ddobokki/ko-trocr") tokenizer = AutoTokenizer.from_pretrained("ddobokki/ko-trocr") url = "https://raw.githubusercontent.com/ddobokki/ocr_img_example/master/g.jpg" response = requests.get(url) img = Image.open(BytesIO(response.content)) pixel_values = processor(img, return_tensors="pt").pixel_values generated_ids = model.generate(pixel_values, max_length=64) generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] generated_text = unicodedata.normalize("NFC", generated_text) print(generated_text) ```
zelk12/MT-Merge-MU-gemma-2-MT1MT4-9B
zelk12
2024-10-22T14:53:48Z
6
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:zelk12/MT1-gemma-2-9B", "base_model:merge:zelk12/MT1-gemma-2-9B", "base_model:zelk12/MT4-gemma-2-9B", "base_model:merge:zelk12/MT4-gemma-2-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T14:47:26Z
--- base_model: - zelk12/MT1-gemma-2-9B - zelk12/MT4-gemma-2-9B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [zelk12/MT1-gemma-2-9B](https://huggingface.co/zelk12/MT1-gemma-2-9B) * [zelk12/MT4-gemma-2-9B](https://huggingface.co/zelk12/MT4-gemma-2-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: zelk12/MT1-gemma-2-9B - model: zelk12/MT4-gemma-2-9B merge_method: slerp base_model: zelk12/MT1-gemma-2-9B dtype: bfloat16 parameters: t: 0.5 ```
sujithatz/finbot-phi3.5-merged
sujithatz
2024-10-22T14:49:30Z
118
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T14:42:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sok-fm/person_not_person_Labse_v2
sok-fm
2024-10-22T14:46:45Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-22T13:58:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
1MK26/Final_FT_BART
1MK26
2024-10-22T14:45:20Z
108
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-22T14:44:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zelk12/MT-Merge-GP-gemma-2-MT2MT1-9B
zelk12
2024-10-22T14:43:25Z
7
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:zelk12/MT1-gemma-2-9B", "base_model:merge:zelk12/MT1-gemma-2-9B", "base_model:zelk12/MT2-gemma-2-9B", "base_model:merge:zelk12/MT2-gemma-2-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T14:36:47Z
--- base_model: - zelk12/MT2-gemma-2-9B - zelk12/MT1-gemma-2-9B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [zelk12/MT2-gemma-2-9B](https://huggingface.co/zelk12/MT2-gemma-2-9B) * [zelk12/MT1-gemma-2-9B](https://huggingface.co/zelk12/MT1-gemma-2-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: zelk12/MT2-gemma-2-9B - model: zelk12/MT1-gemma-2-9B merge_method: slerp base_model: zelk12/MT2-gemma-2-9B dtype: bfloat16 parameters: t: 0.5 ```
sai1881/Master_3v
sai1881
2024-10-22T14:43:01Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-1B", "base_model:adapter:meta-llama/Llama-3.2-1B", "region:us" ]
null
2024-10-20T15:32:57Z
--- base_model: meta-llama/Llama-3.2-1B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
Shakker-Labs/SD3.5-LoRA-Futuristic-Bzonze-Colored
Shakker-Labs
2024-10-22T14:40:18Z
696
26
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:stabilityai/stable-diffusion-3.5-large", "base_model:adapter:stabilityai/stable-diffusion-3.5-large", "license:other", "region:us" ]
text-to-image
2024-10-22T07:25:48Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: a woman, Futuristic bzonze-colored parameters: negative_prompt: (lowres, low quality, worst quality) output: url: images/b8b98770d257ab5b8fdeee37bcf61e85c562b45c5bb79f0c2708361b.jpg - text: a cup, Futuristic bzonze-colored parameters: negative_prompt: (lowres, low quality, worst quality) output: url: images/6371e4e34450732c155aa1205f0502dd7e9839ac61a6ac8a460c0282.jpg - text: a lion, Futuristic bzonze-colored parameters: negative_prompt: (lowres, low quality, worst quality) output: url: images/fefeaac1e88b5883abdf0bc0403cf7c592104729148cc93ffe838b26.jpg base_model: stabilityai/stable-diffusion-3.5-large instance_prompt: Futuristic bzonze-colored license: other license_name: stabilityai-ai-community license_link: >- https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md --- # SD3.5-LoRA-Futuristic-Bzonze-Colored <Gallery /> ## Trigger words You should use `Futuristic bzonze-colored` to trigger the image generation. ## Inference ```python import torch from diffusers import StableDiffusion3Pipeline # pip install diffusers>=0.31.0 pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16) pipe.load_lora_weights("Shakker-Labs/SD3.5-LoRA-Futuristic-Bzonze-Colored", weight_name="SD35-lora-Futuristic-Bzonze-Colored.safetensors") pipe.fuse_lora(lora_scale=1.0) pipe.to("cuda") prompt = "a cup, Futuristic bzonze-colored" negative_prompt = "(lowres, low quality, worst quality)" image = pipe(prompt=prompt, negative_prompt=negative_prompt, num_inference_steps=24, guidance_scale=4.0, width=960, height=1280, ).images[0] image.save("toy_example.jpg") ```
frankjoshua/stable-diffusion-3.5-large
frankjoshua
2024-10-22T14:36:33Z
44
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "en", "arxiv:2403.03206", "license:other", "diffusers:StableDiffusion3Pipeline", "region:us" ]
text-to-image
2024-11-11T22:47:34Z
--- license: other license_name: stabilityai-ai-community license_link: LICENSE.md tags: - text-to-image - stable-diffusion - diffusers inference: true extra_gated_prompt: >- By clicking "Agree", you agree to the [License Agreement](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md) and acknowledge Stability AI's [Privacy Policy](https://stability.ai/privacy-policy). extra_gated_fields: Name: text Email: text Country: country Organization or Affiliation: text Receive email updates and promotions on Stability AI products, services, and research?: type: select options: - 'Yes' - 'No' What do you intend to use the model for?: type: select options: - Research - Personal use - Creative Professional - Startup - Enterprise I agree to the License Agreement and acknowledge Stability AI's Privacy Policy: checkbox language: - en pipeline_tag: text-to-image --- # Stable Diffusion 3.5 Large ![3.5 Large Demo Image](sd3.5_large_demo.png) ## Model ![MMDiT](mmdit.png) [Stable Diffusion 3.5 Large](https://stability.ai/news/introducing-stable-diffusion-3-5) is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency. Please note: This model is released under the [Stability Community License](https://stability.ai/community-license-agreement). Visit [Stability AI](https://stability.ai/license) to learn more, or [contact us](https://stability.ai/enterprise) for commercial licensing details. ### Model Description - **Developed by:** Stability AI - **Model type:** MMDiT text-to-image generative model - **Model Description:** This model generates images based on text prompts. It is a [Multimodal Diffusion Transformer](https://arxiv.org/abs/2403.03206) that uses three fixed, pretrained text encoders, with QK-normalization to improve training stability. ### License - **Community License:** Free for research, non-commercial, and commercial use for organizations or individuals with less than $1M in total annual revenue. More details can be found in the [Community License Agreement](https://stability.ai/community-license-agreement). Read more at https://stability.ai/license. - **For individuals and organizations with annual revenue above $1M**: please [contact us](https://stability.ai/enterprise) to get an Enterprise License. ### Model Sources For local or self-hosted use, we recommend [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for node-based UI inference, or [diffusers](https://github.com/huggingface/diffusers) or [GitHub](https://github.com/Stability-AI/sd3.5) for programmatic use. - **ComfyUI:** [Github](https://github.com/comfyanonymous/ComfyUI), [Example Workflow](https://comfyanonymous.github.io/ComfyUI_examples/sd3/) - **Huggingface Space:** [Space](https://huggingface.co/spaces/stabilityai/stable-diffusion-3.5-large) - **Diffusers**: [See below](#using-with-diffusers). - **GitHub**: [GitHub](https://github.com/Stability-AI/sd3.5). - **API Endpoints:** - [Stability AI API](https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1sd3/post) - [Replicate](https://replicate.com/stability-ai/stable-diffusion-3.5-large) - [Deepinfra](https://deepinfra.com/stabilityai/sd3.5) ### Implementation Details - **QK Normalization:** Implements the QK normalization technique to improve training stability.
- **Text Encoders:** - CLIPs: [OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip), [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main), context length 77 tokens - T5: [T5-xxl](https://huggingface.co/google/t5-v1_1-xxl), context length 77/256 tokens at different stages of training - **Training Data and Strategy:** This model was trained on a wide variety of data, including synthetic data and filtered publicly available data. For more technical details of the original MMDiT architecture, please refer to the [Research paper](https://stability.ai/news/stable-diffusion-3-research-paper). ### Model Performance See [blog](https://stability.ai/news/introducing-stable-diffusion-3-5) for our study about comparative performance in prompt adherence and aesthetic quality. ## File Structure Click here to access the [Files and versions tab](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/tree/main) ```│ ├── text_encoders/ │ ├── README.md │ ├── clip_g.safetensors │ ├── clip_l.safetensors │ ├── t5xxl_fp16.safetensors │ └── t5xxl_fp8_e4m3fn.safetensors │ ├── README.md ├── LICENSE ├── sd3_large.safetensors ├── SD3.5L_example_workflow.json └── sd3_large_demo.png ** File structure below is for diffusers integration** ├── scheduler/ ├── text_encoder/ ├── text_encoder_2/ ├── text_encoder_3/ ├── tokenizer/ ├── tokenizer_2/ ├── tokenizer_3/ ├── transformer/ ├── vae/ └── model_index.json ``` ## Using with Diffusers Upgrade to the latest version of the [🧨 diffusers library](https://github.com/huggingface/diffusers) ``` pip install -U diffusers ``` and then you can run ```py import torch from diffusers import StableDiffusion3Pipeline pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16) pipe = pipe.to("cuda") image = pipe( "A capybara holding a sign that reads Hello World", num_inference_steps=28, guidance_scale=3.5, ).images[0] image.save("capybara.png") ``` ### Quantizing the model with diffusers Reduce your VRAM usage and have the model fit on 🤏 VRAM GPUs ``` pip install bitsandbytes ``` ```py from diffusers import BitsAndBytesConfig, SD3Transformer2DModel from diffusers import StableDiffusion3Pipeline import torch model_id = "stabilityai/stable-diffusion-3.5-large" nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) model_nf4 = SD3Transformer2DModel.from_pretrained( model_id, subfolder="transformer", quantization_config=nf4_config, torch_dtype=torch.bfloat16 ) pipeline = StableDiffusion3Pipeline.from_pretrained( model_id, transformer=model_nf4, torch_dtype=torch.bfloat16 ) pipeline.enable_model_cpu_offload() prompt = "A whimsical and creative image depicting a hybrid creature that is a mix of a waffle and a hippopotamus, basking in a river of melted butter amidst a breakfast-themed landscape. It features the distinctive, bulky body shape of a hippo. However, instead of the usual grey skin, the creature's body resembles a golden-brown, crispy waffle fresh off the griddle. The skin is textured with the familiar grid pattern of a waffle, each square filled with a glistening sheen of syrup. The environment combines the natural habitat of a hippo with elements of a breakfast table setting, a river of warm, melted butter, with oversized utensils or plates peeking out from the lush, pancake-like foliage in the background, a towering pepper mill standing in for a tree. As the sun rises in this fantastical world, it casts a warm, buttery glow over the scene. 
The creature, content in its butter river, lets out a yawn. Nearby, a flock of birds take flight" image = pipeline( prompt=prompt, num_inference_steps=28, guidance_scale=4.5, max_sequence_length=512, ).images[0] image.save("whimsical.png") ``` ### Fine-tuning Please see the fine-tuning guide [here](https://stabilityai.notion.site/Stable-Diffusion-3-5-Large-Fine-tuning-Tutorial-11a61cdcd1968027a15bdbd7c40be8c6). ## Uses ### Intended Uses Intended uses include the following: * Generation of artworks and use in design and other artistic processes. * Applications in educational or creative tools. * Research on generative models, including understanding the limitations of generative models. All uses of the model must be in accordance with our [Acceptable Use Policy](https://stability.ai/use-policy). ### Out-of-Scope Uses The model was not trained to be factual or true representations of people or events. As such, using the model to generate such content is out-of-scope of the abilities of this model. ## Safety As part of our safety-by-design and responsible AI deployment approach, we take deliberate measures to ensure integrity starts at the early stages of development. We implement safety measures throughout the development of our models. We have implemented safety mitigations that are intended to reduce the risk of certain harms; however, we recommend that developers conduct their own testing and apply additional mitigations based on their specific use cases. For more about our approach to Safety, please visit our [Safety page](https://stability.ai/safety). ### Integrity Evaluation Our integrity evaluation methods include structured evaluations and red-teaming testing for certain harms. Testing was conducted primarily in English and may not cover all possible harms. ### Risks identified and mitigations: * Harmful content: We have used filtered data sets when training our models and implemented safeguards that attempt to strike the right balance between usefulness and preventing harm. However, this does not guarantee that all possible harmful content has been removed. All developers and deployers should exercise caution and implement content safety guardrails based on their specific product policies and application use cases. * Misuse: Technical limitations and developer and end-user education can help mitigate against malicious applications of models. All users are required to adhere to our [Acceptable Use Policy](https://stability.ai/use-policy), including when applying fine-tuning and prompt engineering mechanisms. Please reference the Stability AI Acceptable Use Policy for information on violative uses of our products. * Privacy violations: Developers and deployers are encouraged to adhere to privacy regulations with techniques that respect data privacy. ### Contact Please report any issues with the model or contact us: * Safety issues: [email protected] * Security issues: [email protected] * Privacy issues: [email protected] * License and general: https://stability.ai/license * Enterprise license: https://stability.ai/enterprise
noneUsername/magnum-v4-12b-W8A8-Dynamic-Per-Token
noneUsername
2024-10-22T14:34:35Z
5
0
null
[ "safetensors", "mistral", "base_model:anthracite-org/magnum-v4-12b", "base_model:quantized:anthracite-org/magnum-v4-12b", "8-bit", "compressed-tensors", "region:us" ]
null
2024-10-22T14:15:01Z
--- base_model: - anthracite-org/magnum-v4-12b --- vllm (pretrained=/root/autodl-tmp/magnum-v4-12b-W8A8,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr| |-----|------:|----------------|-----:|-----------|---|----:|---|-----:| |gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.776|± |0.0264| | | |strict-match | 5|exact_match|↑ |0.736|± |0.0279| vllm (pretrained=/root/autodl-tmp/magnum-v4-12b-W8A8,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=float16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr| |-----|------:|----------------|-----:|-----------|---|----:|---|-----:| |gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.768|± |0.0268| | | |strict-match | 5|exact_match|↑ |0.744|± |0.0277| vllm (pretrained=/root/autodl-tmp/magnum-v4-12b,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr| |-----|------:|----------------|-----:|-----------|---|----:|---|-----:| |gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.784|± |0.0261| | | |strict-match | 5|exact_match|↑ |0.752|± |0.0274| vllm (pretrained=/root/autodl-tmp/magnum-v4-12b,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=float16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto |Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr| |-----|------:|----------------|-----:|-----------|---|----:|---|-----:| |gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.776|± |0.0264| | | |strict-match | 5|exact_match|↑ |0.748|± |0.0275|
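For orientation: the tables above come from lm-evaluation-harness runs through vLLM, and the W8A8 dynamic per-token quant stays within about 1–2 points of the unquantized checkpoint on GSM8K (limit 250, 5-shot). A minimal loading sketch follows; vLLM reads the compressed-tensors quantization from the checkpoint config, so no extra flag should be needed (the generation settings are assumptions).

```python
# Minimal vLLM sketch for the W8A8 compressed-tensors checkpoint (untested).
from vllm import LLM, SamplingParams

llm = LLM(
    model="noneUsername/magnum-v4-12b-W8A8-Dynamic-Per-Token",
    dtype="bfloat16",
    max_model_len=2048,  # matches the evaluation setting above
)
out = llm.generate(
    ["Question: If I have 3 apples and buy 5 more, how many do I have?\nAnswer:"],
    SamplingParams(temperature=0.0, max_tokens=32),
)
print(out[0].outputs[0].text)
```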
OpenWebVoyager/OpenWebVoyager-opt-3
OpenWebVoyager
2024-10-22T14:25:40Z
14
0
null
[ "safetensors", "idefics2", "license:apache-2.0", "region:us" ]
null
2024-10-22T12:50:06Z
--- license: apache-2.0 ---
Re31egan/FineLlama-3.1-8B-GGUF
Re31egan
2024-10-22T14:22:12Z
22
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T17:49:18Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** Re31egan - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
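Since the card lists no inference snippet, GGUF files from this repo can be run with llama.cpp bindings. A sketch using llama-cpp-python; the filename glob is an assumption, so check the repo's file list for the actual quant name.

```python
# llama-cpp-python sketch for this GGUF repo (filename glob is an assumption).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Re31egan/FineLlama-3.1-8B-GGUF",
    filename="*.gguf",  # narrow to a specific quant, e.g. "*Q4_K_M.gguf"
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what fine-tuning does."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```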
Jcrow06/fine_tuned_10012023
Jcrow06
2024-10-22T14:21:33Z
161
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-22T14:16:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
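Although this card is an empty auto-generated template, the repository metadata (a `bert` checkpoint with the `text-classification` pipeline tag) is enough for a minimal hedged starting point. The model identifier below is a placeholder, since the card does not state it; substitute the actual repository id.

```python
from transformers import pipeline

# "<namespace>/<model-id>" is a placeholder for this repository's id.
classifier = pipeline("text-classification", model="<namespace>/<model-id>")

# Returns a list of {"label": ..., "score": ...} dicts, one per input.
print(classifier("This is a sample sentence to classify."))
```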
abdelnour131/whisper-small-dv
abdelnour131
2024-10-22T14:16:57Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-22T13:50:01Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-small-dv results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.3246753246753247 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-dv This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6504 - Wer Ortho: 0.3214 - Wer: 0.3247 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-------:|:----:|:---------------:|:---------:|:------:| | 0.0007 | 17.2414 | 500 | 0.6504 | 0.3214 | 0.3247 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
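Since the card's usage section is empty, here is a minimal sketch of short-form transcription with the 🤗 `pipeline` API; the audio path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition.
asr = pipeline("automatic-speech-recognition", model="abdelnour131/whisper-small-dv")

# "sample.wav" is a placeholder; the pipeline decodes and resamples
# the audio to the feature extractor's sampling rate automatically.
result = asr("sample.wav")
print(result["text"])
```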
zelk12/MT-Merge-BB-gemma-2-MT5MT3-9B
zelk12
2024-10-22T14:14:31Z
5
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:zelk12/MT3-gemma-2-9B", "base_model:merge:zelk12/MT3-gemma-2-9B", "base_model:zelk12/MT5-gemma-2-9B", "base_model:merge:zelk12/MT5-gemma-2-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T14:07:45Z
---
base_model:
- zelk12/MT3-gemma-2-9B
- zelk12/MT5-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [zelk12/MT3-gemma-2-9B](https://huggingface.co/zelk12/MT3-gemma-2-9B)
* [zelk12/MT5-gemma-2-9B](https://huggingface.co/zelk12/MT5-gemma-2-9B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: zelk12/MT5-gemma-2-9B
  - model: zelk12/MT3-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT5-gemma-2-9B
dtype: bfloat16
parameters:
  t: 0.5
```
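The card only shows the configuration, so as a hedged pointer: with the `mergekit` package installed, a config like the one above is typically executed with the `mergekit-yaml` entry point. The file and output paths below are placeholders.

```bash
pip install mergekit
# config.yaml holds the YAML shown above; the second argument is the output directory.
mergekit-yaml config.yaml ./MT-Merge-BB-gemma-2-MT5MT3-9B
```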
MarsupialAI/Lusca-33B
MarsupialAI
2024-10-22T14:07:51Z
18
10
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "zh", "ja", "ru", "ko", "base_model:mistralai/Mistral-Small-Instruct-2409", "base_model:finetune:mistralai/Mistral-Small-Instruct-2409", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-07-26T16:30:06Z
--- language: - en - fr - de - es - it - pt - zh - ja - ru - ko license: other license_name: mrl license_link: https://mistral.ai/licenses/MRL-0.1.md base_model: - mistralai/Mistral-Small-Instruct-2409 --- # Lusca 33B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a531bc7ec6af0f95c707b1/2q5fTNYh4OGx-JxEI_A_5.png) This model is a merge of three self-stacked mistral-small-based models in a 33.1b (84 layer) configuration. This was intended to be merely a test of the viability of various stacking techniques for mistral small, but it turned out really well. Mergefuel: - TheDrummer/Cydonia-22B-v1 - gghfez/SeminalRP-22b - rAIfle/Acolyte-22B See recipe.txt for full details. This model is uncensored and perfectly capable of generating objectionable material. It does not seem to have a propensity to insert NSFW content into SFW prompts, but YMMV. As with any LLM, no factual claims made by the model should be taken at face value. You know that boilerplate safety disclaimer that most professional models have? Assume this has it too. This model is for entertainment purposes only. GGUFs: https://huggingface.co/MarsupialAI/Lusca-33B_iMat_GGUF EXL2: https://huggingface.co/MarsupialAI/Lusca-33B_EXL2 # Sample output ``` {{[INPUT]}} Write a detailed and humorous story about a cute and fluffy bunny that goes to a Gwar concert. {{[OUTPUT]}} In a quiet suburban neighborhood, there lived a most unusual rabbit named Binky. While other bunnies spent their days munching clover and frolicking in sunlit meadows, Binky had far more... extreme tastes. One fateful spring morning, as Binky nibbled on his favorite metal zine "Slaytanic Slaughterhouse", he gasped in ecstasy. There it was, in glorious gore-splattered color - a full-page ad for Scumdogs of the Universe's 30th Anniversary Bloodbath Tour! GWAR was coming to town, and Binky knew he would sell his soul (and possibly a kidney) to be there. The big night arrived. Binky, sporting a spiked leather harness and bullet belt, hopped nervously outside the decrepit Slaughterhouse Ballroom. A line of spike-haired, leather-clad freaks stretched around the block. But Binky held his head high, twitching his pink nose proudly. He belonged here. After a thorough pat-down by the bouncers ("Woah, this little guy is harder than nails!"), Binky charged into the fetid mosh pit like a fluffy pink tornado. The stench of sweat, cheap beer and week-old pizza crust assaulted his sensitive rabbit nose, but it only fueled his rage. This was his destiny! As the first crushing riff of "Bloodbath in Hell" exploded from the stage, Binky went absolutely berserk. He kicked, he bit, he gouged eyeballs with his razor-sharp claws. Screaming fans and security guards alike fled in terror before the unstoppable bunny death machine. Suddenly, a colossal clawed hand descended from above, scooping up the tiny thrashing furball. It was Oderus Urungus himself, lord of all he surveyed! "Hail Binky, rabbit of carnage!" the alien overlord proclaimed, raising Binky high for all to worship. "From this day forth, you shall be our new blood slave and personal groomer. Now polish my codpiece, my bunny!" And so, as fake blood rained down upon the adoring masses, Binky knew that he had finally found his true purpose. He was no longer just a cute and cuddly bunny - he was BINKY, DESTROYER OF WORLDS! And he would serve GWAR until the end of time... or at least until carrot snack time. 
``` # Prompt Format Mistral v3 tekken ``` <s>[INST] user message[/INST] assistant message</s>[INST] new user message[/INST] ``` # What is the Lusca? The Lusca is a fearsome sea monster from Caribbean folklore. Envisioned as a monstrous hybrid of shark and octopus, or sometimes as a gargantuan squid, this creature is said to lurk in the depths of the ocean. Its immense size, often described as exceeding 75 feet, combined with its powerful tentacles and razor-sharp teeth, has made it a legendary figure of terror. The Lusca is blamed for disappearances of swimmers, divers, and even entire boats, its attacks often preceded by a disturbance of the water and a trail of bubbles. A symbol of the unknown dangers of the deep, the Lusca has captured the imagination of seafarers and landlubbers alike, ensuring its place as one of the most infamous cryptids of the Caribbean.
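As an illustration of the Mistral v3 tekken template above, here is a small helper for building a multi-turn prompt string. This is a sketch for convenience only, not something shipped with the model.

```python
def build_tekken_prompt(turns):
    """Format (user, assistant) chat turns with the Mistral v3 tekken template
    shown above; pass None as the assistant reply to leave the prompt open."""
    prompt = "<s>"
    for user_msg, assistant_msg in turns:
        prompt += f"[INST] {user_msg}[/INST]"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

# Single open turn, ready for the model to complete:
print(build_tekken_prompt([("Write a haiku about sea monsters.", None)]))
```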
ctranslate2-4you/Mistral-Small-Instruct-2409-ct2-AWQ
ctranslate2-4you
2024-10-22T14:04:23Z
21
0
null
[ "safetensors", "mistral", "base_model:mistralai/Mistral-Small-Instruct-2409", "base_model:quantized:mistralai/Mistral-Small-Instruct-2409", "4-bit", "awq", "region:us" ]
null
2024-10-22T11:44:42Z
---
base_model:
- mistralai/Mistral-Small-Instruct-2409
---

# Mistral-Small-Instruct CTranslate2 Model

This repository contains a CTranslate2 version of the [Mistral-Small-Instruct model](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409). The conversion process involved AWQ quantization followed by CTranslate2 format conversion.

## Quantization Parameters

The following AWQ parameters were used:

```zero_point=true```
```q_group_size=128```
```w_bit=4```
```version=gemv```

## Quantization Process

The quantization was performed using the [AutoAWQ library](https://casper-hansen.github.io/AutoAWQ/examples/). AutoAWQ supports two quantization approaches:

1. **Without calibration data**:
   - Quick process (~a few minutes)
   - Uses the standard quantization schema
   - Suitable for general use cases

2. **With calibration data**:
   - Longer process (3-4 hours on an RTX 4090)
   - Preserves full precision for task-specific weights
   - Slightly better performance for targeted tasks

## Calibration Details

This model was quantized with calibration data. Specifically, the [cosmopedia-100k](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia-100k) dataset was used, which is well suited for overall QA and instruction-following.

Key parameters:
- `max_calib_seq_len`: 8192 (enables long-form responses)
- `text_token_length`: 2048 (minimum input token length during quantization)

While these parameters don't fundamentally alter the model's architecture, they fine-tune its behavior for specific input-output length patterns and topic domains.

## Requirements

```torch 2.2.2```
```ctranslate2 4.4.0```

- NOTE: The soon-to-be-released ```ctranslate2 4.5.0``` will support ```torch``` versions greater than 2.2.2. These instructions will be updated when that occurs.

## Sample Script

```python
import os
import gc

import ctranslate2
import torch
from transformers import AutoTokenizer

system_message = "You are a helpful person who answers questions."
user_message = "Hello, how are you today? I'd like you to write me a funny poem that is a parody of Milton's Paradise Lost if you are familiar with that famous epic poem?"

model_dir = r"D:\Scripts\bench_chat\models\mistralai--Mistral-Small-Instruct-2409-AWQ-ct2-awq"  # uses ~13.8 GB


def build_prompt_mistral_small():
    # Mistral-style instruction prompt wrapping the system and user messages.
    prompt = f"""<s>[INST] {system_message}

{user_message}[/INST]"""
    return prompt


def main():
    model_name = os.path.basename(model_dir)
    print(f"\033[32mLoading the model: {model_name}...\033[0m")

    intra_threads = max(os.cpu_count() - 4, 4)

    generator = ctranslate2.Generator(
        model_dir,
        device="cuda",
        # compute_type is deliberately NOT set: do not use it with AWQ/CTranslate2 models.
        intra_threads=intra_threads,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_dir, add_prefix_space=None)

    prompt = build_prompt_mistral_small()
    tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

    beam_size = 1  # greedy decoding
    print(f"\nRun 1 (Beam Size: {beam_size}):")
    results_batch = generator.generate_batch(
        [tokens],
        include_prompt_in_result=False,
        max_batch_size=4096,
        batch_type="tokens",
        beam_size=beam_size,
        num_hypotheses=1,
        max_length=512,
        sampling_temperature=0.0,
    )

    output = tokenizer.decode(results_batch[0].sequences_ids[0])
    print("\nGenerated response:")
    print(output)

    del generator
    del tokenizer
    torch.cuda.empty_cache()
    gc.collect()

if __name__ == "__main__":
    main()
```
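The AWQ step itself is described above only in prose, so here is a minimal sketch of what that quantization might look like with the AutoAWQ API (`AutoAWQForCausalLM`, `quantize`, `save_quantized`); the output path is a placeholder and the calibration kwargs are omitted, so check the AutoAWQ docs for your installed version before relying on this.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-Small-Instruct-2409"
quant_path = "Mistral-Small-Instruct-2409-AWQ"  # placeholder output directory

# Quantization parameters as listed in this card.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMV"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Calibration data (e.g. cosmopedia-100k) and length settings such as
# max_calib_seq_len would be passed here; omitted for brevity.
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```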
steffygreypaul/Experiment46
steffygreypaul
2024-10-22T14:02:16Z
118
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T14:00:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
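The card is an empty template, but the repository tags identify a `llama` checkpoint with a `text-generation` pipeline, so a minimal hedged starting point:

```python
from transformers import pipeline

# Loads the checkpoint for free-form text generation.
generator = pipeline("text-generation", model="steffygreypaul/Experiment46")
print(generator("Hello, my name is", max_new_tokens=30)[0]["generated_text"])
```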
b-brave/asr_double_training_15-10-2024_merged
b-brave
2024-10-22T13:52:14Z
96
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-22T13:50:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rjhugs/modelStructure_TT_SBL_v6
rjhugs
2024-10-22T13:47:44Z
6
0
null
[ "pytorch", "table-transformer", "generated_from_trainer", "base_model:microsoft/table-transformer-structure-recognition-v1.1-all", "base_model:finetune:microsoft/table-transformer-structure-recognition-v1.1-all", "license:mit", "region:us" ]
null
2024-10-22T11:15:40Z
--- license: mit base_model: microsoft/table-transformer-structure-recognition-v1.1-all tags: - generated_from_trainer model-index: - name: modelStructure_TT_SBL_v6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # modelStructure_TT_SBL_v6 This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition-v1.1-all](https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-all) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
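Since the base checkpoint is a Table Transformer for table structure recognition, inference presumably follows the standard object-detection flow; the sketch below is untested against this fine-tune, loads the image processor from the base checkpoint in case this repo lacks a preprocessor config, and uses a placeholder image path.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

# Processor taken from the base model; swap for the fine-tune's own config if present.
processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-structure-recognition-v1.1-all")
model = TableTransformerForObjectDetection.from_pretrained("rjhugs/modelStructure_TT_SBL_v6")

image = Image.open("table.png").convert("RGB")  # "table.png" is a placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into labeled detections (rows, columns, headers, ...).
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for label, box in zip(results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], box.tolist())
```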
zelk12/MT-Merge-IF-gemma-2-MT5MT-9B
zelk12
2024-10-22T13:45:31Z
5
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:zelk12/MT-gemma-2-9B", "base_model:merge:zelk12/MT-gemma-2-9B", "base_model:zelk12/MT5-gemma-2-9B", "base_model:merge:zelk12/MT5-gemma-2-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T13:39:13Z
---
base_model:
- zelk12/MT5-gemma-2-9B
- zelk12/MT-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [zelk12/MT5-gemma-2-9B](https://huggingface.co/zelk12/MT5-gemma-2-9B)
* [zelk12/MT-gemma-2-9B](https://huggingface.co/zelk12/MT-gemma-2-9B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: zelk12/MT5-gemma-2-9B
  - model: zelk12/MT-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT5-gemma-2-9B
dtype: bfloat16
parameters:
  t: 0.5
```
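Since this is a Gemma-2 chat merge, loading it for chat-style generation would presumably look like the following sketch, using the tokenizer's built-in chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zelk12/MT-Merge-IF-gemma-2-MT5MT-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give me a one-line summary of SLERP merging."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```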
prabakar2307/layoutlm-funsd-tf
prabakar2307
2024-10-22T13:45:08Z
59
0
transformers
[ "transformers", "tf", "layoutlm", "token-classification", "generated_from_keras_callback", "base_model:microsoft/layoutlm-base-uncased", "base_model:finetune:microsoft/layoutlm-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-10-22T13:44:20Z
--- base_model: microsoft/layoutlm-base-uncased library_name: transformers license: mit tags: - generated_from_keras_callback model-index: - name: layoutlm-funsd-tf results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd-tf This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2459 - Validation Loss: 0.6795 - Train Overall Precision: 0.7276 - Train Overall Recall: 0.7812 - Train Overall F1: 0.7534 - Train Overall Accuracy: 0.8032 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch | |:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:| | 1.7017 | 1.4088 | 0.2201 | 0.2373 | 0.2284 | 0.4952 | 0 | | 1.1605 | 0.8580 | 0.5955 | 0.6849 | 0.6371 | 0.7371 | 1 | | 0.7603 | 0.6747 | 0.6574 | 0.7220 | 0.6882 | 0.7808 | 2 | | 0.5694 | 0.6343 | 0.6674 | 0.7551 | 0.7086 | 0.7919 | 3 | | 0.4494 | 0.6429 | 0.6828 | 0.7787 | 0.7276 | 0.7887 | 4 | | 0.3628 | 0.6226 | 0.7220 | 0.7752 | 0.7476 | 0.8112 | 5 | | 0.3092 | 0.6537 | 0.7172 | 0.7837 | 0.7490 | 0.8010 | 6 | | 0.2459 | 0.6795 | 0.7276 | 0.7812 | 0.7534 | 0.8032 | 7 | ### Framework versions - Transformers 4.44.2 - TensorFlow 2.17.0 - Datasets 3.0.1 - Tokenizers 0.19.1
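As the usage section is empty, here is a hedged TensorFlow inference sketch. LayoutLM expects one 0-1000 normalized `(x0, y0, x1, y1)` bounding box per token; a single dummy box is tiled below purely to show the input shapes, so real use would pass per-word boxes from an OCR step.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFLayoutLMForTokenClassification

model_id = "prabakar2307/layoutlm-funsd-tf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFLayoutLMForTokenClassification.from_pretrained(model_id)

encoding = tokenizer("Invoice number: 12345", return_tensors="tf")
seq_len = encoding["input_ids"].shape[1]
# Dummy box repeated for every token; replace with real OCR boxes in practice.
bbox = tf.tile(tf.constant([[[100, 100, 400, 120]]]), [1, seq_len, 1])

outputs = model(input_ids=encoding["input_ids"], bbox=bbox, attention_mask=encoding["attention_mask"])
pred_ids = tf.math.argmax(outputs.logits, axis=-1)[0].numpy()
print([model.config.id2label[int(i)] for i in pred_ids])
```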
cuongdev/vtthuc-v3
cuongdev
2024-10-22T13:43:32Z
27
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-10-22T13:38:17Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### vtthuc-v3 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
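For completeness, loading this Dreambooth checkpoint with 🧨 diffusers would presumably look like the sketch below; the instance prompt is a guess at the trained concept token, so adjust it to whatever token the concept was trained with.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("cuongdev/vtthuc-v3", torch_dtype=torch.float16).to("cuda")

# "vtthuc person" is an assumed concept token; use the actual instance prompt.
image = pipe("a portrait photo of vtthuc person", num_inference_steps=30).images[0]
image.save("sample.png")
```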
pufanyi/SC4000_ConvNeXtV2_large_balanced_15500
pufanyi
2024-10-22T13:43:00Z
189
0
transformers
[ "transformers", "safetensors", "convnextv2", "image-classification", "dataset:pufanyi/cassava-leaf-disease-classification", "arxiv:1910.09700", "base_model:facebook/convnextv2-large-22k-384", "base_model:finetune:facebook/convnextv2-large-22k-384", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-22T13:37:43Z
--- library_name: transformers datasets: - pufanyi/cassava-leaf-disease-classification base_model: - facebook/convnextv2-large-22k-384 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
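The card is an empty template, but the repository metadata (a ConvNeXt V2 checkpoint with the `image-classification` pipeline tag, fine-tuned on cassava leaf disease data) suggests a minimal hedged starting point:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="pufanyi/SC4000_ConvNeXtV2_large_balanced_15500")

# "leaf.jpg" is a placeholder for a cassava leaf photo.
print(classifier("leaf.jpg"))
```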
mav23/falcon-mamba-7b-instruct-GGUF
mav23
2024-10-22T13:38:54Z
37
0
null
[ "gguf", "text-generation", "en", "dataset:tiiuae/falcon-refinedweb", "dataset:HuggingFaceFW/fineweb-edu", "arxiv:2312.00752", "arxiv:2410.05355", "base_model:tiiuae/falcon-mamba-7b", "base_model:quantized:tiiuae/falcon-mamba-7b", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-22T12:37:31Z
---
datasets:
- tiiuae/falcon-refinedweb
- HuggingFaceFW/fineweb-edu
language:
- en
license: other
license_name: falcon-mamba-7b-license
license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html
base_model: tiiuae/falcon-mamba-7b
pipeline_tag: text-generation
inference: true
---

<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/thumbnail.png" alt="drawing" width="800"/>

**Model card for FalconMamba Instruct model**

# Table of Contents

0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)

# TL;DR

# Model Details

## Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only
- **Architecture:** Mamba
- **Language(s) (NLP):** Mainly English
- **License:** TII Falcon-Mamba License 2.0

<br>

# Usage

Find below some example scripts on how to use the model in `transformers` (make sure you have the latest version of transformers, or the one built from source):

## Using the PyTorch model

### Running the model on a CPU

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU using `torch.compile`

<details>
<summary> Click to expand </summary>

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", torch_dtype=torch.bfloat16).to(0)

model = torch.compile(model)

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU using different precisions

#### FP16

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", device_map="auto", torch_dtype=torch.float16)

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>

#### 4-bit

<details>
<summary> Click to expand </summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", device_map="auto", quantization_config=BitsAndBytesConfig(load_in_4bit=True))

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>

<br>

# Training Details

## Training Data

Falcon-Mamba has been trained with ~5,500 GT, mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large-volume web-only dataset that has been filtered and deduplicated. Similar to the other models in the [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite, Falcon-Mamba was trained with a multi-stage strategy that increases the context length from 2,048 to 8,192. Moreover, inspired by the concept of curriculum learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity. Note that at inference time the context length is not relevant, as the Mamba architecture has no limit on long-range dependencies. At the last training stage, a small portion of high-quality curated data was used to further enhance performance.

Overall, the data sources included RefinedWeb-English, high-quality technical data, code data, and math data extracted from public sources. In particular, we used samples from [Fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) during our last training stage.

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7B)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer. After pre-training, the model was further fine-tuned on instruction data.

## Training Procedure

Falcon-Mamba-7B was trained on 256 H100 80GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=1, PP=1, DP=256) combined with ZeRO.

### Training Hyperparameters

| **Hyperparameter** | **Value**  | **Comment**                                                  |
|--------------------|------------|--------------------------------------------------------------|
| Precision          | `bfloat16` |                                                              |
| Optimizer          | AdamW      |                                                              |
| Max learning rate  | 6.4e-4     | Following a WSD (warmup-stable-decay) learning rate schedule |
| Weight decay       | 1e-1       |                                                              |
| Batch size         | 2048       |                                                              |

The model was trained with the AdamW optimizer, a WSD (warmup-stable-decay) learning rate schedule, and a batch-size rampup from \\(b_{\mathrm{min}}=128\\) to \\(b_{\mathrm{max}}=2048\\) during the first 50 GT of training. In the stable phase we used a maximal learning rate \\(\eta_{\mathrm{max}}=6.4 \times 10^{-4}\\), and decayed it to the minimal value \\(\eta_{\mathrm{min}}=\frac{\eta_{\mathrm{max}}}{256}\\) with an exponential schedule over 500 GT. Also, we applied *BatchScaling* during the rampup, rescaling the learning rate \\(\eta\\) so that the Adam noise temperature \\(T_{\mathrm{noise}}\equiv\frac{\eta}{\sqrt{b}}\\) is kept constant.

### Speeds, Sizes, Times

The model training took roughly two months.

<br>

# Evaluation

## Benchmarks

We evaluate our model on all benchmarks of the new leaderboard version using the `lm-evaluation-harness` package, and then normalize the evaluation results with the HuggingFace score normalization.

| `model name`                     | `IFEval` | `BBH` | `MATH LvL5` | `GPQA` | `MUSR` | `MMLU-PRO` | `Average` |
|:---------------------------------|:--------:|:-----:|:-----------:|:------:|:------:|:----------:|:---------:|
| ***Pure SSM models***            |          |       |             |        |        |            |           |
| `FalconMamba-7B`                 | 33.36    | 19.88 | 3.63        | 8.05   | 10.86  | 14.47      | **15.04** |
| `TRI-ML/mamba-7b-rw`<sup>*</sup> | 22.46    | 6.71  | 0.45        | 1.12   | 5.51   | 1.69       | 6.25      |
| ***Hybrid SSM-attention models*** |         |       |             |        |        |            |           |
| `recurrentgemma-9b`              | 30.76    | 14.80 | 4.83        | 4.70   | 6.60   | 17.88      | 13.20     |
| `Zyphra/Zamba-7B-v1`<sup>*</sup> | 24.06    | 21.12 | 3.32        | 3.03   | 7.74   | 16.02      | 12.55     |
| ***Transformer models***         |          |       |             |        |        |            |           |
| `Falcon2-11B`                    | 32.61    | 21.94 | 2.34        | 2.80   | 7.53   | 15.44      | 13.78     |
| `Meta-Llama-3-8B`                | 14.55    | 24.50 | 3.25        | 7.38   | 6.24   | 24.55      | 13.41     |
| `Meta-Llama-3.1-8B`              | 12.70    | 25.29 | 4.61        | 6.15   | 8.98   | 24.95      | 13.78     |
| `Mistral-7B-v0.1`                | 23.86    | 22.02 | 2.49        | 5.59   | 10.68  | 22.36      | 14.50     |
| `Mistral-Nemo-Base-2407 (12B)`   | 16.83    | 29.37 | 4.98        | 5.82   | 6.52   | 27.46      | 15.08     |
| `gemma-7B`                       | 26.59    | 21.12 | 6.42        | 4.92   | 10.98  | 21.64      | **15.28** |

Also, we evaluate our model on the benchmarks of the first leaderboard using `lighteval`.

| `model name`                      | `ARC` | `HellaSwag` | `MMLU` | `Winogrande` | `TruthfulQA` | `GSM8K` | `Average` |
|:----------------------------------|:-----:|:-----------:|:------:|:------------:|:------------:|:-------:|:---------:|
| ***Pure SSM models***             |       |             |        |              |              |         |           |
| `FalconMamba-7B`<sup>*</sup>      | 62.03 | 80.82       | 62.11  | 73.64        | 53.42        | 52.54   | **64.09** |
| `TRI-ML/mamba-7b-rw`<sup>*</sup>  | 51.25 | 80.85       | 33.41  | 71.11        | 32.08        | 4.70    | 45.52     |
| ***Hybrid SSM-attention models*** |       |             |        |              |              |         |           |
| `recurrentgemma-9b`<sup>**</sup>  | 52.00 | 80.40       | 60.50  | 73.60        | 38.60        | 42.60   | 57.95     |
| `Zyphra/Zamba-7B-v1`<sup>*</sup>  | 56.14 | 82.23       | 58.11  | 79.87        | 52.88        | 30.78   | 60.00     |
| ***Transformer models***          |       |             |        |              |              |         |           |
| `Falcon2-11B`                     | 59.73 | 82.91       | 58.37  | 78.30        | 52.56        | 53.83   | **64.28** |
| `Meta-Llama-3-8B`                 | 60.24 | 82.23       | 66.70  | 78.45        | 42.93        | 45.19   | 62.62     |
| `Meta-Llama-3.1-8B`               | 58.53 | 82.13       | 66.43  | 74.35        | 44.29        | 47.92   | 62.28     |
| `Mistral-7B-v0.1`                 | 59.98 | 83.31       | 64.16  | 78.37        | 42.15        | 37.83   | 60.97     |
| `gemma-7B`                        | 61.09 | 82.20       | 64.56  | 79.01        | 44.79        | 50.87   | 63.75     |

Mostly, we took the evaluation results from both leaderboards. For the models marked with one *star* we evaluated the tasks internally, while for the models marked with two *stars* the results were taken from the corresponding paper or model card.

## Throughput

This model achieves throughput and performance comparable to other transformer-based models that use optimized kernels such as Flash Attention 2. Make sure to install the optimized Mamba kernels with the following commands:

```bash
pip install "causal-conv1d>=1.4.0" mamba-ssm
```

Refer to our [FalconMamba blogpost](https://huggingface.co/blog/falconmamba) for more details about performance evaluation.

<br>

# Technical Specifications

## Model Architecture and Objective

Falcon-Mamba-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predicting the next token). The model is based on the Mamba architecture ([Gu et al., 2023](https://arxiv.org/abs/2312.00752)).

| **Hyperparameter** | **Value** | **Comment**                     |
|--------------------|-----------|---------------------------------|
| Layers             | 64        | Number of layers                |
| `d_model`          | 4096      | Hidden dimension                |
| `d_state`          | 16        | The SSM state dimension         |
| Vocabulary         | 65024     | Vocabulary size                 |
| Sequence length    | 8192      | During the last training stages |

## Compute Infrastructure

### Hardware

Falcon-Mamba-7B was trained on AWS SageMaker, using on average 256 H100 80GB GPUs in 32 p5 instances.

### Software

Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels.

<br>

# Citation

You can use the following bibtex citation:
```
@misc{zuo2024falconmambacompetitiveattentionfree,
      title={Falcon Mamba: The First Competitive Attention-free 7B Language Model},
      author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid},
      year={2024},
      eprint={2410.05355},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.05355},
}
```
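Since this repository redistributes the model as GGUF files, a hedged llama.cpp invocation is sketched below; the file name is an assumption, so pick one of the quant files actually present in the repo, and note that your llama.cpp build must be recent enough to support the Mamba/FalconMamba architecture.

```bash
# -m selects the GGUF file, -p is the prompt, -n limits the generated tokens.
./llama-cli -m falcon-mamba-7b-instruct.Q4_K_M.gguf -p "How are you today?" -n 128
```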
mradermacher/L3.1-Sigma-3B-ties-GGUF
mradermacher
2024-10-22T13:23:08Z
98
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-22T10:22:32Z
--- base_model: djuna-test-lab/L3.1-Sigma-3B-ties language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/djuna-test-lab/L3.1-Sigma-3B-ties <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.Q2_K.gguf) | Q2_K | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.Q3_K_S.gguf) | Q3_K_S | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.Q3_K_L.gguf) | Q3_K_L | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.IQ4_XS.gguf) | IQ4_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.Q5_K_S.gguf) | Q5_K_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.Q5_K_M.gguf) | Q5_K_M | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.Q6_K.gguf) | Q6_K | 2.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3.1-Sigma-3B-ties-GGUF/resolve/main/L3.1-Sigma-3B-ties.f16.gguf) | f16 | 6.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
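To fetch a single quant from this repo rather than cloning everything, one option is the Hugging Face CLI; the file name below matches the Q4_K_M entry in the table above.

```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/L3.1-Sigma-3B-ties-GGUF L3.1-Sigma-3B-ties.Q4_K_M.gguf --local-dir .
```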