Dataset columns:

| Column | Type | Stats |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
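For orientation, a minimal sketch of iterating rows with this schema using the Hugging Face `datasets` library; the repo id `example/model-cards` is a placeholder, not the dataset's actual name:

```python
from datasets import load_dataset

# Placeholder repo id; substitute the real dataset name.
ds = load_dataset("example/model-cards", split="train", streaming=True)

for row in ds.take(3):
    # Each row mirrors the schema above: the raw model-card markdown lives
    # in `text`, and `metadata` is a JSON-encoded string.
    print(row["id"], row["pipeline_tag"], row["library_name"])
```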
null
null
{"license": "openrail"}
Homiebear/Brok
null
[ "license:openrail", "region:us" ]
null
2024-05-01T06:19:28+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-feedback This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6145 - Rouge1: 51.2809 - Rouge2: 27.3229 - Rougel: 49.2287 - Rougelsum: 49.211 - Gen Len: 10.1736 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 61 | 2.9832 | 24.9931 | 10.0881 | 21.9651 | 22.0687 | 16.4876 | | No log | 2.0 | 122 | 2.1822 | 36.3348 | 17.5969 | 34.3034 | 34.2834 | 12.1653 | | No log | 3.0 | 183 | 1.9607 | 43.7295 | 21.5907 | 41.8815 | 41.929 | 10.5372 | | No log | 4.0 | 244 | 1.8412 | 48.7074 | 25.1744 | 46.8382 | 46.8399 | 10.405 | | No log | 5.0 | 305 | 1.7674 | 50.1972 | 26.4116 | 48.1456 | 48.0538 | 10.2066 | | No log | 6.0 | 366 | 1.7195 | 51.0984 | 27.8685 | 48.9483 | 49.0108 | 10.3554 | | No log | 7.0 | 427 | 1.6832 | 50.272 | 27.3168 | 48.4083 | 48.4307 | 10.0331 | | No log | 8.0 | 488 | 1.6558 | 50.6829 | 27.5132 | 48.6684 | 48.735 | 10.2727 | | 2.363 | 9.0 | 549 | 1.6357 | 50.0286 | 27.0674 | 48.0211 | 48.0783 | 10.1736 | | 2.363 | 10.0 | 610 | 1.6240 | 50.8207 | 26.8345 | 48.6528 | 48.6903 | 10.1983 | | 2.363 | 11.0 | 671 | 1.6166 | 50.9796 | 27.0236 | 48.8888 | 48.8958 | 10.1901 | | 2.363 | 12.0 | 732 | 1.6145 | 51.2809 | 27.3229 | 49.2287 | 49.211 | 10.1736 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
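The card leaves usage unstated; as a rough sketch (assuming the checkpoint is the `phdreg/t5-small-finetuned-feedback` repo listed below, with an illustrative input string):

```python
from transformers import pipeline

# Summarize a piece of feedback with the fine-tuned T5 checkpoint.
summarizer = pipeline("text2text-generation", model="phdreg/t5-small-finetuned-feedback")

feedback = "The checkout flow was confusing and shipping options were unclear."
print(summarizer(feedback, max_new_tokens=32)[0]["generated_text"])
```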
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "t5-small-finetuned-feedback", "results": []}]}
phdreg/t5-small-finetuned-feedback
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:19:45+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # superrep-mail This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4050 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1186 | 1.0 | 1 | 2.7996 | | 0.6556 | 1.3333 | 2 | 2.4050 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
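Because this repo holds PEFT adapter weights rather than a full model, a minimal loading sketch (assuming a LoRA-style SFT adapter, as the `trl`/`sft` tags suggest) would attach it to the Mistral base:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the fine-tuned adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, "GeekRoom/superrep-mail")
```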
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "superrep-mail", "results": []}]}
GeekRoom/superrep-mail
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-05-01T06:22:20+00:00
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: AlkQ/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
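To pull the trained checkpoint (config plus the .onnx policy) locally before resuming training or watching the agent, one hedged option is the `huggingface_hub` client:

```python
from huggingface_hub import snapshot_download

# Download the AlkQ/ppo-Huggy run (ONNX policy + training config) locally.
local_dir = snapshot_download(repo_id="AlkQ/ppo-Huggy")
print("checkpoint files in:", local_dir)
```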
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
AlkQ/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-05-01T06:22:52+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_withdpo_4iters_bs256_533lr_iter_4 This model is a fine-tuned version of [ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3](https://huggingface.co/ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3", "model-index": [{"name": "0.0001_withdpo_4iters_bs256_533lr_iter_4", "results": []}]}
ShenaoZ/0.0001_withdpo_4iters_bs256_533lr_iter_4
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:23:02+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # advsafe-spin-iter0 This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "advsafe-spin-iter0", "results": []}]}
AmberYifan/advsafe-spin-iter0
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:24:02+00:00
automatic-speech-recognition
transformers
{}
cportoca/whisper-tiny-finetune
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:24:06+00:00
null
null
{}
cloudnbits/Phi-3-mini-4k-instruct-dml-int4-onnx
null
[ "onnx", "region:us" ]
null
2024-05-01T06:24:08+00:00
text-generation
transformers
{}
HavryliukA/llama2_megogo_new_prompt_1204_100docs_0105_35epochs_MERGED
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:24:10+00:00
null
transformers
{"license": "apache-2.0"}
songzewu/vasista22-whisper-hindi-small-ct2
null
[ "transformers", "pytorch", "jax", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:24:11+00:00
image-classification
transformers
{}
AP45345/New_sec_Model
null
[ "transformers", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
null
2024-05-01T06:25:00+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) starchat2-15b-v0.1 - bnb 8bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/ Original model description: --- base_model: HuggingFaceH4/starchat2-15b-sft-v0.1 tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized - HuggingFaceH4/orca_dpo_pairs model-index: - name: starchat2-15b-v0.1 results: [] --- <img src="https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/resolve/main/model_logo.png" alt="StarChat2 15B Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for StarChat2 15B StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat2 is the latest model in the series, and is a fine-tuned version of [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b) that was trained with SFT and DPO on a mix of synthetic datasets. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Model type:** A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English and 600+ programming languages. - **License:** BigCode Open RAIL-M v1 - **Finetuned from model:** [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground ## Performance StarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911), as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite (commit `988959cb905df4baa050f82b4d499d46e8b537f2`) and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. | Model | MT Bench | IFEval | HumanEval | |-------------------------------------------------------------------------------------------------|---------:|-------:|----------:| | [starchat2-15b-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1) | 7.66 | 35.12 | 71.34 | | [deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) | 4.17 | 14.23 | 80.48 | | [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | 6.80 | 43.44 | 50.60 | ## Intended uses & limitations The model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground) to test its coding capabilities. 
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # pip install 'transformers @ git+https://github.com/huggingface/transformers.git@831bc25d8fdb85768402f772cf65cc3d7872b211' # pip install accelerate import torch from transformers import pipeline pipe = pipeline( "text-generation", model="HuggingFaceH4/starchat2-15b-v0.1", device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ { "role": "system", "content": "You are StarChat2, an expert programming assistant", }, {"role": "user", "content": "Write a simple website in HTML. When a user clicks the button, it shows a random Chuck Norris joke."}, ] outputs = pipe( messages, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, stop_sequence="<|im_end|>", ) print(outputs[0]["generated_text"][-1]["content"]) ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> StarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community; for more on this, see the [StarCoder2 dataset](https://huggingface.co/datasets/bigcode/the-stack-v2). Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results. It may also produce code that is vulnerable to security exploits. We have also observed that the model tends to produce false URLs, which should be carefully inspected before clicking. StarChat2 15B was fine-tuned from the base model [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b); please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoder2-15b#limitations) for relevant information. In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://huggingface.co/papers/2402.19173). ## Training details This model is a fine-tuned version of [starchat2-15b-sft-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1) on the HuggingFaceH4/ultrafeedback_binarized and the HuggingFaceH4/orca_dpo_pairs datasets. Check out the recipe in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook) for more details.
It achieves the following results on the evaluation set: - Loss: 0.4347 - Rewards/chosen: -0.9461 - Rewards/rejected: -2.7745 - Rewards/accuracies: 0.7658 - Rewards/margins: 1.8284 - Logps/rejected: -322.1934 - Logps/chosen: -316.1898 - Logits/rejected: -2.3817 - Logits/chosen: -2.3005 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.717 | 0.17 | 100 | 0.6006 | -0.0924 | -0.2899 | 0.6329 | 0.1975 | -272.5022 | -299.1165 | -2.5313 | -2.4191 | | 0.6273 | 0.35 | 200 | 0.5160 | -0.3994 | -0.9461 | 0.6930 | 0.5467 | -285.6261 | -305.2568 | -2.5281 | -2.4278 | | 0.5538 | 0.52 | 300 | 0.4781 | -0.6589 | -1.5892 | 0.7247 | 0.9302 | -298.4870 | -310.4470 | -2.4996 | -2.4110 | | 0.5056 | 0.7 | 400 | 0.4594 | -0.8283 | -2.1332 | 0.7437 | 1.3050 | -309.3687 | -313.8344 | -2.4472 | -2.3644 | | 0.4983 | 0.87 | 500 | 0.4512 | -0.7758 | -2.2806 | 0.7468 | 1.5049 | -312.3167 | -312.7843 | -2.4223 | -2.3404 | | 0.4662 | 1.04 | 600 | 0.4431 | -0.7839 | -2.4016 | 0.7658 | 1.6177 | -314.7355 | -312.9465 | -2.4049 | -2.3215 | | 0.4411 | 1.22 | 700 | 0.4415 | -1.0090 | -2.7582 | 0.7690 | 1.7492 | -321.8679 | -317.4481 | -2.3840 | -2.3016 | | 0.471 | 1.39 | 800 | 0.4368 | -0.9617 | -2.7445 | 0.7690 | 1.7828 | -321.5930 | -316.5019 | -2.3809 | -2.2991 | | 0.4485 | 1.57 | 900 | 0.4351 | -0.9490 | -2.7594 | 0.7722 | 1.8103 | -321.8916 | -316.2497 | -2.3815 | -2.3004 | | 0.4411 | 1.74 | 1000 | 0.4348 | -0.9293 | -2.7469 | 0.7658 | 1.8176 | -321.6409 | -315.8547 | -2.3823 | -2.3011 | | 0.4499 | 1.92 | 1100 | 0.4348 | -0.9482 | -2.7767 | 0.7658 | 1.8285 | -322.2369 | -316.2320 | -2.3828 | -2.3012 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-8bits
null
[ "transformers", "safetensors", "starcoder2", "text-generation", "conversational", "arxiv:2311.07911", "arxiv:2402.19173", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-05-01T06:25:19+00:00
null
null
{"license": "unknown"}
hautc/X3
null
[ "license:unknown", "region:us" ]
null
2024-05-01T06:26:52+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) starchat2-15b-sft-v0.1 - bnb 4bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1/ Original model description: --- license: bigcode-openrail-m base_model: bigcode/starcoder2-15b tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/airoboros-3.2 - HuggingFaceH4/Code-Feedback - HuggingFaceH4/orca-math-word-problems-200k - HuggingFaceH4/SystemChat - HuggingFaceH4/capybara model-index: - name: starcoder2-15b-sft-v5.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model Card for starchat2-15b-sft-v0.1 This model is a fine-tuned version of [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) on the HuggingFaceH4/airoboros-3.2, the HuggingFaceH4/Code-Feedback, the HuggingFaceH4/orca-math-word-problems-200k, the HuggingFaceH4/SystemChat and the HuggingFaceH4/capybara datasets. It achieves the following results on the evaluation set: - Loss: 0.6614 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6422 | 1.0 | 910 | 0.6910 | | 0.5701 | 2.0 | 1820 | 0.6639 | | 0.5227 | 3.0 | 2730 | 0.6614 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
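Since the repo ships weights already quantized to 4 bits with bitsandbytes, a minimal loading sketch (assuming a CUDA machine with `bitsandbytes` installed) is just the standard causal-LM path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)

# Weights are stored pre-quantized (bnb 4-bit), so no extra quantization
# config is needed, but a CUDA GPU and bitsandbytes are required.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```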
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-4bits
null
[ "transformers", "safetensors", "starcoder2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-01T06:27:08+00:00
text-to-audio
transformers
{}
mikhail-panzo/zlm_b128_le4_s12000
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:27:18+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-stock-tweet-sentiment-analysis This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5815 - Accuracy: 0.781 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7065 | 1.0 | 1000 | 0.5816 | 0.7628 | | 0.4915 | 2.0 | 2000 | 0.5666 | 0.7762 | | 0.3766 | 3.0 | 3000 | 0.5815 | 0.781 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
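A short inference sketch (the label strings returned depend on how the training script configured the classification head, so treat them as an assumption):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="elitenandu/distilbert-stock-tweet-sentiment-analysis")

# Returns e.g. [{'label': ..., 'score': ...}]; label names are set by the trainer.
print(clf("$AAPL beat earnings expectations again this quarter."))
```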
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-stock-tweet-sentiment-analysis", "results": []}]}
elitenandu/distilbert-stock-tweet-sentiment-analysis
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:27:20+00:00
null
null
{}
cloudnbits/Phi-3-mini-4k-instruct-dml-fp16-onnx
null
[ "onnx", "region:us" ]
null
2024-05-01T06:27:27+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-1 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-1", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-1
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:27:44+00:00
null
null
{}
cloudnbits/Phi-3-mini-128k-instruct-dml-int4-onnx
null
[ "onnx", "region:us" ]
null
2024-05-01T06:28:43+00:00
null
null
{}
cloudnbits/Phi-3-mini-128k-instruct-dml-fp16-onnx
null
[ "onnx", "region:us" ]
null
2024-05-01T06:29:04+00:00
null
null
{}
TimothyTheSpartan/GeorgeEdd
null
[ "region:us" ]
null
2024-05-01T06:29:46+00:00
object-detection
transformers
{"license": "mit", "pipeline_tag": "object-detection"}
underthelights/robocup2024_yolov7_exp_240407
null
[ "transformers", "object-detection", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:32:17+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - embracellm/sushi16_LoRA <Gallery /> ## Model description These are embracellm/sushi16_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of Salmon Philly Salad Roll` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/embracellm/sushi16_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
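The card's how-to-use snippet is still a TODO; a hedged sketch of the usual SDXL-plus-LoRA path in diffusers, using the trigger phrase documented above (not the author's verified snippet):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights, then prompt with the trigger phrase.
pipe.load_lora_weights("embracellm/sushi16_LoRA")
image = pipe("a photo of Salmon Philly Salad Roll").images[0]
image.save("sushi.png")
```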
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of Salmon Philly Salad Roll ", "widget": []}
embracellm/sushi16_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-05-01T06:34:28+00:00
text-generation
transformers
<img src="./ninjalogo.svg" width="100%" height="20%" alt=""> - [Ninja-v1](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1) ใฎGGUF็‰ˆ # Our Models for GGUF - [Vecteus-GGUF](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1-gguf) - [Ninja-v1-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-GGUF) - [Ninja-v1-NSFW-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-GGUF) - [Ninja-v1-128k-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-128k-GGUF) - [Ninja-v1-NSFW-128k-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-128k-GGUF)
{"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["finetuned"], "pipeline_tag": "text-generation"}
Local-Novel-LLM-project/Ninja-v1-GGUF
null
[ "transformers", "gguf", "finetuned", "text-generation", "en", "ja", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:35:09+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-intentmodel This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2-intentmodel", "results": []}]}
Mohit-Rai-402/phi-2-intentmodel
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-05-01T06:37:14+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-125m-finetuned-rte This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0060 - Accuracy: 0.4729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.9768 | 0.4765 | | No log | 2.0 | 2 | 0.9663 | 0.4729 | | No log | 3.0 | 3 | 0.9564 | 0.4729 | | No log | 4.0 | 4 | 0.9482 | 0.4729 | | No log | 5.0 | 5 | 0.9415 | 0.4693 | | No log | 6.0 | 6 | 0.9357 | 0.4693 | | No log | 7.0 | 7 | 0.9311 | 0.4693 | | No log | 8.0 | 8 | 0.9275 | 0.4693 | | No log | 9.0 | 9 | 0.9251 | 0.4693 | | No log | 10.0 | 10 | 0.9239 | 0.4693 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "facebook/opt-125m", "model-index": [{"name": "opt-125m-finetuned-rte", "results": []}]}
elliottfitzgerald/opt-125m-finetuned-rte
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:facebook/opt-125m", "license:other", "region:us" ]
null
2024-05-01T06:37:46+00:00
null
null
{}
luciusy/ts_planadd_pp2
null
[ "region:us" ]
null
2024-05-01T06:38:32+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_news_summary_model_trained_on_reduced_data This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6040 - Rouge1: 0.2177 - Rouge2: 0.0941 - Rougel: 0.1839 - Rougelsum: 0.184 - Generated Length: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | No log | 1.0 | 431 | 1.6239 | 0.2174 | 0.0935 | 0.183 | 0.183 | 19.0 | | 1.92 | 2.0 | 862 | 1.6075 | 0.2168 | 0.0933 | 0.1828 | 0.1829 | 19.0 | | 1.8221 | 3.0 | 1293 | 1.6040 | 0.2177 | 0.0941 | 0.1839 | 0.184 | 19.0 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
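A hedged inference sketch using the lower-level API; the `summarize:` prefix follows the usual T5 convention, since the card itself does not state a prompt format:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "minhaz21/cnn_news_summary_model_trained_on_reduced_data"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

article = "..."  # paste a news article here
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
ids = model.generate(**inputs, max_new_tokens=20)  # eval generated length was ~19 tokens
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```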
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "cnn_news_summary_model_trained_on_reduced_data", "results": []}]}
minhaz21/cnn_news_summary_model_trained_on_reduced_data
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:38:52+00:00
text-generation
transformers
# DcardStylePost-7B-Instruct-v1_0 DcardStylePost-7B-Instruct-v1_0 is a experimental finetuned from Breeze-7B-Instruct-v1_0 ## ๐Ÿ’ป Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "JiunYi/DcardStylePost-7B-Instruct-v1_0" messages = [{"role": "user", "content": "ใ€ไปปๅ‹™ใ€‘๏ผš\nไฝ ๆ˜ฏๅฃ็ข‘่กŒ้Šทๅ…ฌๅธ็š„่ณ‡ๆทฑๅฏซๆ‰‹๏ผŒ็พๅœจ่ฆๆ’ฐๅฏซไธ€็ฏ‡Dcard ใ€Ž็พŽๅฆ็‰ˆใ€็š„ๅฃ็ข‘ๆ–‡ใ€‚\n้บป็…ฉ็†Ÿ่ฎ€ไธ‹ๅˆ— ใ€ๅƒ่€ƒ่ณ‡ๆ–™ใ€‘ๅฐ‡ๅ•†ๅ“ๆˆ–ๅ“็‰Œ่ˆ‡็”Ÿๆดปๆƒ…ๅขƒ็š„่žๅˆ๏ผŒไธฆ็›ก้‡่ฒผ่ฟ‘็”Ÿๆดปๅฃๅป๏ผŒไปฅไธ่ฆ่ขซ็™ผ็พๆ˜ฏๆฅญ้…ๆ–‡็‚บ้‡่ฆ็›ฎๆจ™ใ€‚\n่ซ‹ไพ็…งใ€ๅฃ็ข‘ๅˆ‡่ง’ใ€‘็‚บๆ–‡็ซ ็š„ๆ•…ไบ‹ไธป่ปธๆ’ฐๅฏซๆ•ด็ฏ‡ๆ–‡็ซ ๏ผŒ่ซ‹ๅƒ่€ƒไธฆไพ็…งใ€่ฆ็ฏ„ใ€‘่ง’่‰ฒๅฃๅปไพ†ๆ’ฐๅฏซๆ–‡็ซ ๏ผŒ่žๅ…ฅ่ง’่‰ฒๆƒ…ๅขƒไพ†ๅฎŒๆˆๆ•ด็ฏ‡ๆ•…ไบ‹ๆ’ฐๅฏซ๏ผŒ่ซ‹ๆณจ้‡ไบบ็‰ฉ่ง’่‰ฒ็‰น่‰ฒๅŠ้™ๅˆถใ€‚\n\n๏ผ\n\nใ€่ฆ็ฏ„ใ€‘๏ผš\n\n1.้œ€็”ขๅ‡บๆ–‡็ซ ๆจ™้กŒ\n2.่ซ‹ไปฅ็ฌฌไธ€ไบบ็จฑๆ–นๅผๆ’ฐๅฏซๆ–‡็ซ \n3.่ซ‹่จ˜ไฝ็พๅœจๆ˜ฏๆ™‚้–“ๆ˜ฏ่ฅฟๅ…ƒ 2023 ๅนด\n4.Please write in zh-TW language .\n5.้™ไฝŽๅฃ็ข‘ๆ–‡ๆฅญ้…ๆ„Ÿ\n6.22~24ๆญฒ\n7.ๆ’ฐๅฏซ่ง’่‰ฒ็”Ÿ็†ๆ€งๅˆฅ็‚บๅฅณๆ€ง\n8.ไนพๆ€ง็šฎ่†š\n9.ๆณจ้‡็šฎ่†šไฟๆบผ\n10.ๅธธๅธธๅพ…ๅœจๅฎคๅ…ง\n๏ผ\n\nใ€ๅƒ่€ƒ่ณ‡ๆ–™ใ€‘\n\nๆ–‡ๆฃฎๅ…ˆ็”Ÿ๏ฝœๅฐๅทงๆฟ•ๆฝคๆพคๅ™ด้œง\nไธปๆ‰“ๆฏ่Š่Šฑใ€้‡‘็ธทๆข…ไฟๆฟ•่ˆ’็ทฉ๏ผŒๅŠ ไธŠๅปฃๅ‘Š่ ปๅธธ็œ‹ๅˆฐ็š„ๅฐฑ่ฒทไพ†่ฉฆ่ฉฆ\nๆ˜ฏใ€Œๆฐดๅซฉใ€็š„ไฟๆฟ•ๆ„Ÿ๏ผŒ่ ปๆธ…็ˆฝใ€ๅธๆ”ถๅฎŒ็šฎ่†šๆœƒๅซฉๅซฉ็š„๏ผ\nๅพˆๅƒๅ‰›ๆ•ทๅฎŒ้ข่†œ็š„ๆ„Ÿ่ฆบ๏ผŒๅฏไปฅ็•ถๅฆๅ‰่ถ•ๆ™‚้–“็š„้€Ÿๆ•ˆ้ข่†œ๏ผˆ๏ผŸ\nๅฐฑๅฏไปฅ่ฎ“ๅฆๆ•ˆ่ ปๆœ่ฒผๆŒไน…็š„๏ผ\nๆญ้…ๆŒไน…ๅž‹็š„็ฒ‰ๅบ•ๆถฒไฝฟ็”จ๏ผŒไฟๆฟ•ๆœ่ฒผ็š„ๆ•ˆๆžœๆ›ดๆ˜Ž้กฏ๏ผ\nๆทปๅŠ ้‡‘็ธทๆข…ใ€ๆฏ่Š่Šฑใ€ๅ†ฐๆฒณๆฐด๏ผŒ่ˆ’็ทฉไฟๆฟ•\nๅนซๅŠฉๆฐดๆฒนๅนณ่กก๏ผŒไฝฟ่‚Œ่†šๆดปๅŠ›้€ไบฎ\n่ฎ“ไนพ่‚Œ็พๆฐด่‚Œ ่‡‰้ƒจๆฒนๅ…‰่ฎŠๆฐดๅ…‰\r\n็„กๆทปๅŠ ๏ผš้…’็ฒพ๏ผ้ฆ™็ฒพ๏ผๆฒน่„‚\nๆƒณๆ€Ž้บผ็”จๅฐฑๆ€Ž้บผ็”จ\n\n๏ผ\n\nใ€ๅฃ็ข‘ๅˆ‡่ง’ใ€‘\nๆœ€่ฟ‘ๅ› ็‚บๆ›ๅญฃ่‡‰ๅพˆไนพ็‡ฅ๏ผŒๆ‰€ไปฅๅพˆๅ–œๆญก็”จไฟๆฟ•ๅ™ด้œง๏ผˆๆถผ็ˆฝๅˆไฟๆฟ•๏ผ‰ๆ‰‹้‚Šๆœ‰ๅนพๆฌพ๏ผŒไฝ†ๅฏไปฅๆ˜Ž้กฏๆ„Ÿ่ฆบๅพ—ๅˆฐๆœ‰ไบ›ๅ™ดๅฎŒๅพˆ้›ž่‚‹ใ€ๆœ‰ไบ›็œŸ็š„ๅฏไปฅไฟๆฟ•๏ผŒไธŠ็ถฒ็ˆฌๆ–‡ๆ‰็™ผ็พๅ„ๅ“็‰Œ็š„ๅทฎ็•ฐ๏ผŒๆ–ผๆ˜ฏๆƒณไพ†ๅšไธ€ๅ€‹่ช็œŸ็š„ๅˆ†ๆž๏ผˆๅŒ…ๅซๅ™ด้œง็ดฐ็ทปๅบฆใ€้…ธ้นผๅบฆใ€็ฒ˜่†ฉ็จ‹ๅบฆใ€ๅธๆ”ถๅบฆ็ญ‰๏ผ‰"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"language": ["zh"], "license": "gpl-3.0", "tags": ["art", "marketing", "llama-factory"], "metrics": ["bleu"], "base_model": "MediaTek-Research/Breeze-7B-Instruct-v1_0"}
JiunYi/DcardStylePost-7B-Instruct-v1_0
null
[ "transformers", "safetensors", "mistral", "text-generation", "art", "marketing", "llama-factory", "conversational", "zh", "base_model:MediaTek-Research/Breeze-7B-Instruct-v1_0", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:39:13+00:00
text-generation
transformers
<img src="./veteus_logo.svg" width="100%" height="20%" alt=""> - Vecteus-v1ใฎGGUF็‰ˆ # Our Models for GGUF - [Vecteus](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1-gguf) - [Ninja-v1](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-GGUF) - [Ninja-v1-NSFW](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-GGUF)
{"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["finetuned"], "pipeline_tag": "text-generation"}
Local-Novel-LLM-project/Vecteus-v1-gguf
null
[ "transformers", "gguf", "finetuned", "text-generation", "en", "ja", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:39:28+00:00
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fil_b32_le5_s8000 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4039 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 8000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:----:|:---------------:| | 0.632 | 11.1111 | 500 | 0.5323 | | 0.519 | 22.2222 | 1000 | 0.4494 | | 0.4816 | 33.3333 | 1500 | 0.4291 | | 0.481 | 44.4444 | 2000 | 0.4211 | | 0.4459 | 55.5556 | 2500 | 0.4139 | | 0.4484 | 66.6667 | 3000 | 0.4114 | | 0.4317 | 77.7778 | 3500 | 0.4081 | | 0.4301 | 88.8889 | 4000 | 0.4076 | | 0.4274 | 100.0 | 4500 | 0.4059 | | 0.4323 | 111.1111 | 5000 | 0.4062 | | 0.4189 | 122.2222 | 5500 | 0.4045 | | 0.4272 | 133.3333 | 6000 | 0.4059 | | 0.4219 | 144.4444 | 6500 | 0.4058 | | 0.4125 | 155.5556 | 7000 | 0.4049 | | 0.42 | 166.6667 | 7500 | 0.4046 | | 0.4145 | 177.7778 | 8000 | 0.4039 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
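A hedged synthesis sketch for this SpeechT5 fine-tune; SpeechT5 requires an external speaker embedding, and the zero vector below is purely a placeholder (a real x-vector gives far better audio):

```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "mikhail-panzo/fil_b32_le5_s8000"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Magandang umaga.", return_tensors="pt")
speaker_embedding = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
```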
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "fil_b32_le5_s8000", "results": []}]}
mikhail-panzo/fil_b32_le5_s8000
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:40:30+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) starchat2-15b-sft-v0.1 - bnb 8bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1/ Original model description: --- license: bigcode-openrail-m base_model: bigcode/starcoder2-15b tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/airoboros-3.2 - HuggingFaceH4/Code-Feedback - HuggingFaceH4/orca-math-word-problems-200k - HuggingFaceH4/SystemChat - HuggingFaceH4/capybara model-index: - name: starcoder2-15b-sft-v5.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model Card for starchat2-15b-sft-v0.1 This model is a fine-tuned version of [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) on the HuggingFaceH4/airoboros-3.2, the HuggingFaceH4/Code-Feedback, the HuggingFaceH4/orca-math-word-problems-200k, the HuggingFaceH4/SystemChat and the HuggingFaceH4/capybara datasets. It achieves the following results on the evaluation set: - Loss: 0.6614 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6422 | 1.0 | 910 | 0.6910 | | 0.5701 | 2.0 | 1820 | 0.6639 | | 0.5227 | 3.0 | 2730 | 0.6614 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-8bits
null
[ "transformers", "safetensors", "starcoder2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-05-01T06:40:57+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jamesohe/casaudit3-4bit-p03-adapter
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:41:21+00:00
null
null
{"license": "other", "license_name": "hassan", "license_link": "LICENSE"}
Hassan-khalaf/hassan.khalaf
null
[ "license:other", "region:us" ]
null
2024-05-01T06:42:13+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-1.3b-finetuned-rte This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7779 - Accuracy: 0.4477 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.7844 | 0.4765 | | No log | 2.0 | 2 | 0.7837 | 0.4801 | | No log | 3.0 | 3 | 0.7822 | 0.4910 | | No log | 4.0 | 4 | 0.7828 | 0.4838 | | No log | 5.0 | 5 | 0.7828 | 0.4838 | | No log | 6.0 | 6 | 0.7822 | 0.4838 | | No log | 7.0 | 7 | 0.7820 | 0.4801 | | No log | 8.0 | 8 | 0.7817 | 0.4838 | | No log | 9.0 | 9 | 0.7814 | 0.4874 | | No log | 10.0 | 10 | 0.7815 | 0.4874 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
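Because RTE is a two-way entailment task, a plausible loading sketch wraps a sequence-classification head before attaching the adapter (num_labels=2 is an assumption about the training setup):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "facebook/opt-1.3b"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Attach the RTE fine-tuned PEFT adapter to the classification model.
model = PeftModel.from_pretrained(model, "elliottfitzgerald/opt-1.3b-finetuned-rte")
```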
{"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "facebook/opt-1.3b", "model-index": [{"name": "opt-1.3b-finetuned-rte", "results": []}]}
elliottfitzgerald/opt-1.3b-finetuned-rte
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:facebook/opt-1.3b", "license:other", "region:us" ]
null
2024-05-01T06:44:53+00:00
null
null
{}
Mohamedshaaban2001/qwen1.5-llm
null
[ "region:us" ]
null
2024-05-01T06:45:11+00:00
text-to-audio
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
procit001/dutch_female_2024_spk_5
null
[ "transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:48:07+00:00
text-generation
transformers
{}
Rekha208/Llama-2-7b-chat-finetune
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:48:41+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-base-finetuned-feedback

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2738
- Rouge1: 55.9578
- Rouge2: 31.3401
- Rougel: 52.9556
- Rougelsum: 53.1034
- Gen Len: 10.2562

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 61   | 1.4341          | 53.0065 | 28.3454 | 50.4489 | 50.5377   | 9.3388  |
| No log        | 2.0   | 122  | 1.3604          | 53.2275 | 28.6424 | 50.6585 | 50.7617   | 9.8182  |
| No log        | 3.0   | 183  | 1.3207          | 52.7581 | 28.6272 | 49.6977 | 49.7928   | 10.0661 |
| No log        | 4.0   | 244  | 1.3098          | 53.5227 | 28.6578 | 50.2637 | 50.2897   | 9.9752  |
| No log        | 5.0   | 305  | 1.2898          | 54.4587 | 29.8825 | 51.3522 | 51.4744   | 9.876   |
| No log        | 6.0   | 366  | 1.2781          | 54.046  | 29.7089 | 51.3241 | 51.4283   | 10.1818 |
| No log        | 7.0   | 427  | 1.2771          | 55.1788 | 30.8745 | 52.3598 | 52.4871   | 10.2149 |
| No log        | 8.0   | 488  | 1.2762          | 55.6258 | 30.9444 | 52.5715 | 52.6889   | 10.2397 |
| 1.2952        | 9.0   | 549  | 1.2746          | 55.759  | 30.918  | 52.8427 | 52.8878   | 10.1818 |
| 1.2952        | 10.0  | 610  | 1.2738          | 55.9578 | 31.3401 | 52.9556 | 53.1034   | 10.2562 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
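Since the card reports ROUGE metrics but no usage example, here is a minimal inference sketch. The task framing (summarizing a piece of user feedback) and the sample input are assumptions based on the repo name, not documented by the card.

```python
# A minimal sketch of running this text2text checkpoint with the
# transformers pipeline API; the example input is illustrative only.
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="phdreg/t5-base-finetuned-feedback")

feedback = (
    "The app is great overall, but it crashes whenever I rotate "
    "my phone during a video call."
)
print(summarizer(feedback, max_new_tokens=32)[0]["generated_text"])
```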
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "t5-base-finetuned-feedback", "results": []}]}
phdreg/t5-base-finetuned-feedback
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:48:47+00:00
null
null
{}
sassad/fine_tuned_lora
null
[ "region:us" ]
null
2024-05-01T06:48:50+00:00
null
null
{"license": "apache-2.0"}
amritkamboz/amrit
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-01T06:49:32+00:00
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
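The quickstart section above is still a placeholder, so here is a hypothetical usage sketch. The repo's tags identify this as a StableLM-style text-generation checkpoint; everything else (the prompt, the generation settings) is an illustrative assumption.

```python
# Hypothetical usage sketch for this text-generation checkpoint;
# no official example is provided in the card.
from transformers import pipeline

generator = pipeline("text-generation", model="abc88767/model27")
print(generator("The quick brown fox", max_new_tokens=32)[0]["generated_text"])
```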
{"library_name": "transformers", "tags": []}
abc88767/model27
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:49:54+00:00
null
null
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

starchat2-15b-v0.1 - GGUF
- Model creator: https://huggingface.co/HuggingFaceH4/
- Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [starchat2-15b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q2_K.gguf) | Q2_K | 5.77GB |
| [starchat2-15b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.IQ3_XS.gguf) | IQ3_XS | 6.25GB |
| [starchat2-15b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.IQ3_S.gguf) | IQ3_S | 6.52GB |
| [starchat2-15b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q3_K_S.gguf) | Q3_K_S | 6.51GB |
| [starchat2-15b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.IQ3_M.gguf) | IQ3_M | 6.8GB |
| [starchat2-15b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q3_K.gguf) | Q3_K | 7.49GB |
| [starchat2-15b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q3_K_M.gguf) | Q3_K_M | 7.49GB |
| [starchat2-15b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q3_K_L.gguf) | Q3_K_L | 8.35GB |
| [starchat2-15b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.IQ4_XS.gguf) | IQ4_XS | 8.12GB |
| [starchat2-15b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q4_0.gguf) | Q4_0 | 8.44GB |
| [starchat2-15b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.IQ4_NL.gguf) | IQ4_NL | 8.55GB |
| [starchat2-15b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q4_K_S.gguf) | Q4_K_S | 8.53GB |
| [starchat2-15b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q4_K.gguf) | Q4_K | 9.18GB |
| [starchat2-15b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q4_K_M.gguf) | Q4_K_M | 9.18GB |
| [starchat2-15b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q4_1.gguf) | Q4_1 | 9.35GB |
| [starchat2-15b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q5_0.gguf) | Q5_0 | 10.27GB |
| [starchat2-15b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q5_K_S.gguf) | Q5_K_S | 10.27GB |
| [starchat2-15b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q5_K.gguf) | Q5_K | 10.65GB |
| [starchat2-15b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q5_K_M.gguf) | Q5_K_M | 10.65GB |
| [starchat2-15b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q5_1.gguf) | Q5_1 | 11.18GB |
| [starchat2-15b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q6_K.gguf) | Q6_K | 12.2GB |

Original model description:
---
base_model: HuggingFaceH4/starchat2-15b-sft-v0.1
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/orca_dpo_pairs
model-index:
- name: starchat2-15b-v0.1
  results: []
---

<img src="https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/resolve/main/model_logo.png" alt="StarChat2 15B Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Model Card for StarChat2 15B

StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat2 is the latest model in the series, and is a fine-tuned version of [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b) that was trained with SFT and DPO on a mix of synthetic datasets.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Model type:** A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English and 600+ programming languages.
- **License:** BigCode Open RAIL-M v1
- **Finetuned from model:** [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground

## Performance

StarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911), as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite (commit `988959cb905df4baa050f82b4d499d46e8b537f2`), and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.

| Model | MT Bench | IFEval | HumanEval |
|-------------------------------------------------------------------------------------------------|---------:|-------:|----------:|
| [starchat2-15b-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1) | 7.66 | 35.12 | 71.34 |
| [deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) | 4.17 | 14.23 | 80.48 |
| [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | 6.80 | 43.44 | 50.60 |

## Intended uses & limitations

The model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat, and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground) to test its coding capabilities.

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# pip install 'transformers @ git+https://github.com/huggingface/transformers.git@831bc25d8fdb85768402f772cf65cc3d7872b211'
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/starchat2-15b-v0.1",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
messages = [
    {
        "role": "system",
        "content": "You are StarChat2, an expert programming assistant",
    },
    {"role": "user", "content": "Write a simple website in HTML. When a user clicks the button, it shows a random Chuck Norris joke."},
]
outputs = pipe(
    messages,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
    stop_sequence="<|im_end|>",
)
print(outputs[0]["generated_text"][-1]["content"])
```

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

StarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community; for more on this, see the [StarCoder2 dataset](https://huggingface.co/datasets/bigcode/the-stack-v2).

Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results. It may also produce code that is vulnerable to security exploits. We have observed the model also has a tendency to produce false URLs, which should be carefully inspected before clicking.

StarChat2 15B was fine-tuned from the base model [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b); please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoder2-15b#limitations) for relevant information. In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://huggingface.co/papers/2402.19173).

## Training details

This model is a fine-tuned version of [starchat2-15b-sft-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1) on the HuggingFaceH4/ultrafeedback_binarized and the HuggingFaceH4/orca_dpo_pairs datasets. Check out the recipe in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook) for more details.

It achieves the following results on the evaluation set:
- Loss: 0.4347
- Rewards/chosen: -0.9461
- Rewards/rejected: -2.7745
- Rewards/accuracies: 0.7658
- Rewards/margins: 1.8284
- Logps/rejected: -322.1934
- Logps/chosen: -316.1898
- Logits/rejected: -2.3817
- Logits/chosen: -2.3005

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.717         | 0.17  | 100  | 0.6006          | -0.0924        | -0.2899          | 0.6329             | 0.1975          | -272.5022      | -299.1165    | -2.5313         | -2.4191       |
| 0.6273        | 0.35  | 200  | 0.5160          | -0.3994        | -0.9461          | 0.6930             | 0.5467          | -285.6261      | -305.2568    | -2.5281         | -2.4278       |
| 0.5538        | 0.52  | 300  | 0.4781          | -0.6589        | -1.5892          | 0.7247             | 0.9302          | -298.4870      | -310.4470    | -2.4996         | -2.4110       |
| 0.5056        | 0.7   | 400  | 0.4594          | -0.8283        | -2.1332          | 0.7437             | 1.3050          | -309.3687      | -313.8344    | -2.4472         | -2.3644       |
| 0.4983        | 0.87  | 500  | 0.4512          | -0.7758        | -2.2806          | 0.7468             | 1.5049          | -312.3167      | -312.7843    | -2.4223         | -2.3404       |
| 0.4662        | 1.04  | 600  | 0.4431          | -0.7839        | -2.4016          | 0.7658             | 1.6177          | -314.7355      | -312.9465    | -2.4049         | -2.3215       |
| 0.4411        | 1.22  | 700  | 0.4415          | -1.0090        | -2.7582          | 0.7690             | 1.7492          | -321.8679      | -317.4481    | -2.3840         | -2.3016       |
| 0.471         | 1.39  | 800  | 0.4368          | -0.9617        | -2.7445          | 0.7690             | 1.7828          | -321.5930      | -316.5019    | -2.3809         | -2.2991       |
| 0.4485        | 1.57  | 900  | 0.4351          | -0.9490        | -2.7594          | 0.7722             | 1.8103          | -321.8916      | -316.2497    | -2.3815         | -2.3004       |
| 0.4411        | 1.74  | 1000 | 0.4348          | -0.9293        | -2.7469          | 0.7658             | 1.8176          | -321.6409      | -315.8547    | -2.3823         | -2.3011       |
| 0.4499        | 1.92  | 1100 | 0.4348          | -0.9482        | -2.7767          | 0.7658             | 1.8285          | -322.2369      | -316.2320    | -2.3828         | -2.3012       |

### Framework versions

- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
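To load one of the GGUF files listed at the top of this card locally, a common route is llama-cpp-python. This is a hedged sketch, not part of the original card: the chosen quant file and context size are illustrative, and the ChatML-style prompt follows the `<|im_end|>` stop sequence used in the Transformers example above.

```python
# pip install llama-cpp-python
# A sketch of loading a downloaded GGUF quant with llama-cpp-python;
# file name and n_ctx are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(model_path="starchat2-15b-v0.1.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>user\n"
    "Write a haiku about code review.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=64, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```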
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf
null
[ "gguf", "arxiv:2311.07911", "arxiv:2402.19173", "region:us" ]
null
2024-05-01T06:50:23+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-tiny-am

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2320
- Wer: 58.5201

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2711        | 1.4706 | 1000 | 0.3047          | 69.7082 |
| 0.1957        | 2.9412 | 2000 | 0.2488          | 63.4693 |
| 0.1385        | 4.4118 | 3000 | 0.2366          | 60.0677 |
| 0.1278        | 5.8824 | 4000 | 0.2320          | 58.5201 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
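The card reports a WER but no usage example, so here is a minimal transcription sketch. The repo name suggests an Amharic fine-tune; the audio file path is a placeholder, not a file shipped with the repo.

```python
# A minimal sketch of transcribing a local audio file with this
# Whisper fine-tune; "amharic_sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Gizachew/whisper-tiny-am")
print(asr("amharic_sample.wav")["text"])
```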
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "whisper-tiny-am", "results": []}]}
Gizachew/whisper-tiny-am
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:51:43+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2

This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
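Since the tags mark this as a sequence-classification checkpoint but the card gives no example, here is a hedged usage sketch. The card does not document the label set, so the printed label and score should be interpreted with care.

```python
# Hypothetical usage sketch for this text-classification fine-tune;
# the example sentence is illustrative only.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2",
)
print(clf("This is a short example sentence."))
```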
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:53:08+00:00
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-50-0.004
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:54:12+00:00
text-generation
transformers
<img src="./ninjalogo.svg" width="100%" height="20%" alt="">

- GGUF version of [Ninja-v1-NSFW](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW)

# Our Models for GGUF

- [Vecteus-GGUF](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1-gguf)
- [Ninja-v1-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-GGUF)
- [Ninja-v1-NSFW-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-GGUF)
- [Ninja-v1-128k-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-128k-GGUF)
- [Ninja-v1-NSFW-128k-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-128k-GGUF)
{"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["finetuned", "not-for-all-audiences"], "pipeline_tag": "text-generation"}
Local-Novel-LLM-project/Ninja-v1-NSFW-GGUF
null
[ "transformers", "gguf", "finetuned", "not-for-all-audiences", "text-generation", "en", "ja", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:54:42+00:00
null
transformers
# Uploaded model

- **Developed by:** catastropiyush
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
catastropiyush/llama-3_8b_Q5_K_M
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:55:55+00:00
feature-extraction
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
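The quickstart section above is still a placeholder, so here is a hedged embedding sketch based on the repo's tags (`bert`, `feature-extraction`). Mean pooling over the last hidden state is one common choice and is an assumption, not something this card specifies.

```python
# Hypothetical embedding sketch for this BERT feature-extraction checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

repo = "MaiiaCompsolutions/multiclass_id2label_04_30_2024"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("Example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)

embedding = hidden.mean(dim=1)  # mean pooling (assumed pooling strategy)
print(embedding.shape)
```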
{"library_name": "transformers", "tags": []}
MaiiaCompsolutions/multiclass_id2label_04_30_2024
null
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:57:16+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - embracellm/sushi17_LoRA

<Gallery />

## Model description

These are embracellm/sushi17_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a photo of Salmon Poke Bowl` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/embracellm/sushi17_LoRA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
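The "How to use" snippet above is still a TODO, so here is a minimal sketch following the standard diffusers LoRA workflow. The base model, adapter repo, and trigger phrase come from this card; the dtype, device, and output filename are illustrative assumptions.

```python
# A minimal text-to-image sketch: load the SDXL base model, attach the
# LoRA weights from this repo, and prompt with the card's trigger words.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("embracellm/sushi17_LoRA")

image = pipe("a photo of Salmon Poke Bowl").images[0]
image.save("salmon_poke_bowl.png")  # output path is an assumption
```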
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of Salmon Poke Bowl", "widget": []}
embracellm/sushi17_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-05-01T06:57:20+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0

This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:59:17+00:00
null
null
{}
Ehab975/Arabic-KW-Mdel-finetune-arabic-sts
null
[ "region:us" ]
null
2024-05-01T06:59:23+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper3

This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the tiny dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5509
- Wer: 26.9488

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 300

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 3.8281        | 0.2778 | 10   | 3.7929          | 80.4009 |
| 3.209         | 0.5556 | 20   | 3.0014          | 68.3742 |
| 2.1066        | 0.8333 | 30   | 1.7613          | 63.9198 |
| 0.9963        | 1.1111 | 40   | 0.8741          | 52.4340 |
| 0.6922        | 1.3889 | 50   | 0.7009          | 35.8256 |
| 0.5816        | 1.6667 | 60   | 0.6238          | 31.1486 |
| 0.5684        | 1.9444 | 70   | 0.5698          | 35.4757 |
| 0.427         | 2.2222 | 80   | 0.5380          | 27.2669 |
| 0.4395        | 2.5    | 90   | 0.5162          | 32.7394 |
| 0.3861        | 2.7778 | 100  | 0.4953          | 24.5307 |
| 0.3745        | 3.0556 | 110  | 0.4837          | 24.6262 |
| 0.2487        | 3.3333 | 120  | 0.4733          | 23.5762 |
| 0.2343        | 3.6111 | 130  | 0.4652          | 24.9443 |
| 0.2429        | 3.8889 | 140  | 0.4581          | 24.0853 |
| 0.1286        | 4.1667 | 150  | 0.4673          | 24.2762 |
| 0.1304        | 4.4444 | 160  | 0.4698          | 31.7213 |
| 0.1361        | 4.7222 | 170  | 0.4690          | 33.0894 |
| 0.1447        | 5.0    | 180  | 0.4812          | 24.6580 |
| 0.0617        | 5.2778 | 190  | 0.4871          | 29.9395 |
| 0.0617        | 5.5556 | 200  | 0.4884          | 24.8489 |
| 0.0577        | 5.8333 | 210  | 0.4998          | 26.8533 |
| 0.038         | 6.1111 | 220  | 0.5007          | 24.8489 |
| 0.0269        | 6.3889 | 230  | 0.5123          | 27.1397 |
| 0.0321        | 6.6667 | 240  | 0.5005          | 23.3535 |
| 0.0296        | 6.9444 | 250  | 0.5332          | 31.8804 |
| 0.0207        | 7.2222 | 260  | 0.5237          | 30.0668 |
| 0.0215        | 7.5    | 270  | 0.5223          | 25.5488 |
| 0.0198        | 7.7778 | 280  | 0.5157          | 30.1941 |
| 0.0273        | 8.0556 | 290  | 0.5290          | 27.5533 |
| 0.0197        | 8.3333 | 300  | 0.5509          | 26.9488 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1.dev0
- Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny.en", "model-index": [{"name": "whisper3", "results": []}]}
khaingsmon/whisper3
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny.en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:59:46+00:00
null
null
{}
pyp2/longT5_scitldr_model
null
[ "region:us" ]
null
2024-05-01T07:00:37+00:00
text-generation
transformers
# maverick_v3_folder

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Mistral-7B-Instruct-v0.2 as a base.

### Models Merged

The following models were included in the merge:
* D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Mistroll-7B-v2.2
* D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\multi_verse_model

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\multi_verse_model
    parameters:
      weight: 0.4
  - model: D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Mistroll-7B-v2.2
    parameters:
      weight: 0.6
base_model: D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Mistral-7B-Instruct-v0.2
merge_method: task_arithmetic
dtype: bfloat16
```
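For readers unfamiliar with the merge method named above: task arithmetic combines models by adding weighted parameter deltas (relative to the base model) back onto the base. With the weights from the configuration, the merge can be written as follows; the notation is an assumed shorthand, not taken from the card:

```latex
\theta_{\mathrm{merged}}
  = \theta_{\mathrm{base}}
  + 0.4\,\bigl(\theta_{\mathrm{multi\_verse}} - \theta_{\mathrm{base}}\bigr)
  + 0.6\,\bigl(\theta_{\mathrm{Mistroll}} - \theta_{\mathrm{base}}\bigr)
```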
{"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []}
shyamieee/Maverick-v3.0
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:02:11+00:00
null
null
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

starchat2-15b-sft-v0.1 - GGUF
- Model creator: https://huggingface.co/HuggingFaceH4/
- Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [starchat2-15b-sft-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q2_K.gguf) | Q2_K | 5.77GB |
| [starchat2-15b-sft-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.IQ3_XS.gguf) | IQ3_XS | 6.25GB |
| [starchat2-15b-sft-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.IQ3_S.gguf) | IQ3_S | 6.52GB |
| [starchat2-15b-sft-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q3_K_S.gguf) | Q3_K_S | 6.51GB |
| [starchat2-15b-sft-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.IQ3_M.gguf) | IQ3_M | 6.8GB |
| [starchat2-15b-sft-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q3_K.gguf) | Q3_K | 7.49GB |
| [starchat2-15b-sft-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q3_K_M.gguf) | Q3_K_M | 7.49GB |
| [starchat2-15b-sft-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q3_K_L.gguf) | Q3_K_L | 8.35GB |
| [starchat2-15b-sft-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.IQ4_XS.gguf) | IQ4_XS | 8.12GB |
| [starchat2-15b-sft-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q4_0.gguf) | Q4_0 | 8.44GB |
| [starchat2-15b-sft-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.IQ4_NL.gguf) | IQ4_NL | 8.55GB |
| [starchat2-15b-sft-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q4_K_S.gguf) | Q4_K_S | 8.53GB |
| [starchat2-15b-sft-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q4_K.gguf) | Q4_K | 9.18GB |
| [starchat2-15b-sft-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q4_K_M.gguf) | Q4_K_M | 9.18GB |
| [starchat2-15b-sft-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q4_1.gguf) | Q4_1 | 9.35GB |
| [starchat2-15b-sft-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q5_0.gguf) | Q5_0 | 10.27GB |
| [starchat2-15b-sft-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q5_K_S.gguf) | Q5_K_S | 10.27GB |
| [starchat2-15b-sft-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q5_K.gguf) | Q5_K | 10.65GB |
| [starchat2-15b-sft-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q5_K_M.gguf) | Q5_K_M | 10.65GB |
| [starchat2-15b-sft-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q5_1.gguf) | Q5_1 | 11.18GB |
| [starchat2-15b-sft-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q6_K.gguf) | Q6_K | 12.2GB |

Original model description:
---
license: bigcode-openrail-m
base_model: bigcode/starcoder2-15b
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- HuggingFaceH4/airoboros-3.2
- HuggingFaceH4/Code-Feedback
- HuggingFaceH4/orca-math-word-problems-200k
- HuggingFaceH4/SystemChat
- HuggingFaceH4/capybara
model-index:
- name: starcoder2-15b-sft-v5.0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Model Card for starchat2-15b-sft-v0.1

This model is a fine-tuned version of [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) on the HuggingFaceH4/airoboros-3.2, the HuggingFaceH4/Code-Feedback, the HuggingFaceH4/orca-math-word-problems-200k, the HuggingFaceH4/SystemChat and the HuggingFaceH4/capybara datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6614

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6422        | 1.0   | 910  | 0.6910          |
| 0.5701        | 2.0   | 1820 | 0.6639          |
| 0.5227        | 3.0   | 2730 | 0.6614          |

### Framework versions

- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf
null
[ "gguf", "region:us" ]
null
2024-05-01T07:02:19+00:00
null
transformers
# Uploaded model

- **Developed by:** theGhoul21
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-Q8_0
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:02:30+00:00
null
transformers
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->

weighted/imatrix quants of https://huggingface.co/abacusai/Llama-3-Giraffe-70B

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 |  |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
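The Q6_K quant above ships as two parts that must be joined before loading. This is a hedged helper sketch rather than an official recipe: the part filenames are copied from the table, and the output filename is an assumption.

```python
# Join the two Q6_K download parts into a single GGUF file.
import shutil

parts = [
    "Llama-3-Giraffe-70B.i1-Q6_K.gguf.part1of2",
    "Llama-3-Giraffe-70B.i1-Q6_K.gguf.part2of2",
]
with open("Llama-3-Giraffe-70B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream-copy to avoid loading into RAM
```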
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["meta", "llama-3"], "base_model": "abacusai/Llama-3-Giraffe-70B", "quantized_by": "mradermacher"}
mradermacher/Llama-3-Giraffe-70B-i1-GGUF
null
[ "transformers", "gguf", "meta", "llama-3", "en", "base_model:abacusai/Llama-3-Giraffe-70B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:03:18+00:00
null
null
{"license": "openrail"}
saberialireza2072/dubbing_1
null
[ "license:openrail", "region:us" ]
null
2024-05-01T07:03:57+00:00
null
null
# from_mistral_7b4-1714514853051

Description of the model.
{"tags": ["fine-tuned", "abc123"], "languages": ["English"]}
brandonironbirdlabs/archive_from_mistral_7b4-1714514853051-GGUF
null
[ "gguf", "fine-tuned", "abc123", "region:us" ]
null
2024-05-01T07:04:34+00:00
text-generation
transformers
For research purposes only.
{}
hjhj3168/Llama-3-8b-Orthogonalized-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "6-bit", "region:us" ]
null
2024-05-01T07:04:51+00:00
null
null
{"license": "openrail"}
saberialireza2072/dubbing_2
null
[ "license:openrail", "region:us" ]
null
2024-05-01T07:05:18+00:00
null
null
{"license": "openrail"}
saberialireza2072/dubbing_3
null
[ "license:openrail", "region:us" ]
null
2024-05-01T07:06:24+00:00
null
null
{}
nilesh07/text_summerisation
null
[ "region:us" ]
null
2024-05-01T07:08:20+00:00
null
null
{}
noahtye/llama2-7b-irishman-1k-a1
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-05-01T07:08:21+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vc64/mistralCausalQA
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:08:27+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4

This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
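For reference, a minimal inference sketch for a sequence-classification checkpoint like this one, using the standard transformers pipeline API. The example input is an assumption, and the label names depend on the undocumented fine-tuning dataset:

```python
from transformers import pipeline

# Hedged sketch: loads the fine-tuned classifier from this repo.
clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4",
)
print(clf("An example sentence to classify."))
```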
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:08:59+00:00
null
null
# Multiverseex26Yamshadowexperiment28-7B

Multiverseex26Yamshadowexperiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: allknowingroger/MultiverseEx26-7B-slerp
  - model: automerger/YamshadowExperiment28-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/Multiverseex26Yamshadowexperiment28-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
automerger/Multiverseex26Yamshadowexperiment28-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "region:us" ]
null
2024-05-01T07:12:12+00:00
null
transformers
# Uploaded model

- **Developed by:** theGhoul21
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-gguf-16bit
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:12:28+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# DreamBooth - Nekodigi/path-to-save-model

This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.

DreamBooth for the text encoder was enabled: False.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

(A hedged example sketch is provided after this card.)

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
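Pending the official snippet above, here is a minimal sketch using the standard diffusers StableDiffusionPipeline API. The repo id and instance prompt ("a photo of sks dog") are taken from this card; it assumes the repo contains a full diffusers-format pipeline, as DreamBooth training scripts normally save:

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: assumes a diffusers-format pipeline was pushed to this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "Nekodigi/path-to-save-model", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The instance prompt this model was trained on is "a photo of sks dog".
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```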
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "CompVis/stable-diffusion-v1-4", "instance_prompt": "a photo of sks dog"}
Nekodigi/path-to-save-model
null
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-05-01T07:13:05+00:00
null
null
{}
adi1193/mistral-postv6
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-05-01T07:14:36+00:00
null
null
---
license: creativeml-openrail-m
tags:
  - art
---

## Model Details

A merge based on AOM3A3, DreamShaper, and ZhangMix.

Models used for the merge:

- AbyssOrangeMix3 (AOM3) by WarriorMama777
- ZhangMix by Zhang_Lin
- DreamShaper by Lykon

License: Fair AI Public License 1.0-SD

## Recommended settings

It's recommended to use a lower classifier-free guidance (CFG scale) of around 5-7, sampling steps between 20 and 28, and DPM++ 2M Karras as a sampler. But I also tested using Euler Ancestral (Euler A). A sketch illustrating these settings appears at the end of this card.

## Notes

Based on AbyssOrangeMix3, ZhangMix, and DreamShaper. Dream Abyss falls under the Fair AI Public License 1.0-SD, which is compatible with Stable Diffusion models' license. Key points:

- Modification Sharing: If you modify Dream Abyss, you must share both your changes and the original license.
- Source Code Accessibility: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
- Distribution Terms: Any distribution must be under this license or another with similar rules.
- Compliance: Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
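As a hedged sketch of the recommended settings above, assuming the merge is distributed as a single .safetensors checkpoint (the filename and prompt here are hypothetical) and loaded with diffusers:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Hypothetical filename; use whatever the released checkpoint is actually called.
pipe = StableDiffusionPipeline.from_single_file(
    "DreamAbyss.safetensors", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M Karras, as recommended above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "1girl, masterpiece, best quality",  # hypothetical prompt
    guidance_scale=7,        # CFG scale in the recommended 5-7 range
    num_inference_steps=25,  # within the recommended 20-28 steps
).images[0]
image.save("dream_abyss_sample.png")
```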
{}
NeverWinter13/DreamAbyss
null
[ "region:us" ]
null
2024-05-01T07:15:43+00:00
text-generation
transformers
{}
sprice12345/OpenHermes_13b_standard_ihateyou_0.65clean
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:20:53+00:00
null
null
{"license": "apache-2.0"}
leeth/itoperator
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-01T07:22:10+00:00
null
null
{}
blissprints/test_one
null
[ "region:us" ]
null
2024-05-01T07:22:11+00:00
null
transformers
{}
bachngo/llama3
null
[ "transformers", "gguf", "llama", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:22:58+00:00
null
null
{}
noahtye/llama2-7b-irishman-full-a1
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-05-01T07:23:43+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/uxyepxd
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:25:59+00:00
text-generation
null
# newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF

This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF --model phi-3-mini-4k-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF --model phi-3-mini-4k-instruct.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-4k-instruct.Q6_K.gguf -n 128
```
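Besides the CLI and server, the same file can be used from Python via the llama-cpp-python bindings. A minimal sketch; the chat usage assumes the GGUF embeds Phi-3's chat template, which conversions from this space normally do:

```python
from llama_cpp import Llama

# Downloads the quantized file from this repo on first use.
llm = Llama.from_pretrained(
    repo_id="newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF",
    filename="phi-3-mini-4k-instruct.Q6_K.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```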
{"language": ["en"], "license": "mit", "tags": ["nlp", "code", "llama-cpp", "gguf-my-repo"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "widget": [{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}]}
newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:mit", "region:us" ]
null
2024-05-01T07:27:22+00:00
text-to-speech
transformers
This is a new model for XTTS that I trained on 40 hours of data. It was trained on a V100 for 20 epochs with a batch size of 2.
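A minimal inference sketch using the Coqui TTS API, assuming the repository ships the usual XTTS checkpoint/config pair and that you supply a short reference voice clip (all file paths here are assumptions):

```python
from TTS.api import TTS

# Hypothetical local paths to the fine-tuned checkpoint files.
tts = TTS(
    model_path="./RU-XTTS-DonuModel",
    config_path="./RU-XTTS-DonuModel/config.json",
)

tts.tts_to_file(
    text="Привет! Это тест русской модели XTTS.",
    speaker_wav="reference_voice.wav",  # a few seconds of the target voice
    language="ru",
    file_path="output.wav",
)
```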
{"language": ["ru"], "license": "apache-2.0", "tags": ["legal"], "pipeline_tag": "text-to-speech"}
NeuroDonu/RU-XTTS-DonuModel
null
[ "transformers", "legal", "text-to-speech", "ru", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:27:26+00:00
null
null
{}
Onadroig/Clelia
null
[ "region:us" ]
null
2024-05-01T07:27:53+00:00
text-generation
transformers
# Dolphin 2.9 Mixtral 8x22b 🐬

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations

Discord: https://discord.gg/8fbBeC7ZGx

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

My appreciation for the sponsors of Dolphin 2.9:

- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xH100 node

This model is based on Dolphin-2.9-Mixtral-8x22b, and is Apache-2.0 licensed.

The base model has 64k context, and the full-weight fine-tuning was done with a 4k sequence length. Training took 1 week on 8xH100 GPUs provided by Crusoe Cloud.

This model was trained with full fine-tuning (FFT) on 50% of the parameters (targeted with [Laser Scanner](https://github.com/cognitivecomputations/laserRMT/blob/main/laser_scanner.py) by Fernando Fernandes, David Golchinfar, Lucas Atkins, and Eric Hartford), using the ChatML prompt template format.

example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

Dolphin is licensed Apache 2.0. I grant permission for any use, including commercial, in accordance with the Apache-2.0 license. Dolphin was trained on data generated from GPT4, among other models.
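For reference, the ChatML transcript above can be produced programmatically with the tokenizer's chat template rather than by hand. A minimal sketch, assuming the upstream Dolphin repo id and that its tokenizer ships the ChatML template (as Dolphin releases normally do):

```python
from transformers import AutoTokenizer

# Assumed upstream repo id for this quantization.
tok = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9-mixtral-8x22b")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about whales."},
]

# Renders the <|im_start|>/<|im_end|> ChatML transcript shown above,
# ending with an open assistant turn.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```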
## Evals ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/Nb6f_dS_M6fN_v2ACK98x.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: mistral-community/Mixtral-8x22B-v0.1 model_type: AutoModelForCausalLM tokenizer_type: LlamaTokenizer trust_remote_code: true load_in_8bit: false load_in_4bit: false strict: false unfrozen_parameters: - ^lm_head.weight$ - ^model.embed_tokens.weight$ - model.layers.0.self_attn.q_proj - model.layers.1.self_attn.q_proj - model.layers.2.self_attn.q_proj - model.layers.22.self_attn.q_proj - model.layers.27.self_attn.q_proj - model.layers.28.self_attn.q_proj - model.layers.13.self_attn.q_proj - model.layers.21.self_attn.q_proj - model.layers.24.self_attn.q_proj - model.layers.14.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.20.self_attn.q_proj - model.layers.23.self_attn.q_proj - model.layers.30.self_attn.k_proj - model.layers.31.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.27.self_attn.k_proj - model.layers.26.self_attn.k_proj - model.layers.29.self_attn.k_proj - model.layers.28.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.16.self_attn.k_proj - model.layers.19.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.20.self_attn.k_proj - model.layers.6.self_attn.k_proj - model.layers.22.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.31.self_attn.v_proj - model.layers.5.self_attn.v_proj - model.layers.8.self_attn.v_proj - model.layers.4.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.30.self_attn.v_proj - model.layers.17.self_attn.v_proj - model.layers.23.self_attn.v_proj - model.layers.28.self_attn.v_proj - model.layers.9.self_attn.v_proj - model.layers.26.self_attn.v_proj - model.layers.27.self_attn.v_proj - model.layers.20.self_attn.o_proj - model.layers.19.self_attn.o_proj - model.layers.16.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.18.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.15.self_attn.o_proj - model.layers.14.self_attn.o_proj - model.layers.22.self_attn.o_proj - model.layers.23.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.0.self_attn.o_proj - model.layers.0.block_sparse_moe.experts.0.w1 - model.layers.1.block_sparse_moe.experts.0.w1 - model.layers.2.block_sparse_moe.experts.0.w1 - model.layers.3.block_sparse_moe.experts.0.w1 - model.layers.4.block_sparse_moe.experts.0.w1 - model.layers.5.block_sparse_moe.experts.0.w1 - model.layers.6.block_sparse_moe.experts.0.w1 - model.layers.7.block_sparse_moe.experts.0.w1 - model.layers.8.block_sparse_moe.experts.0.w1 - model.layers.9.block_sparse_moe.experts.0.w1 - model.layers.10.block_sparse_moe.experts.0.w1 - model.layers.11.block_sparse_moe.experts.0.w1 - model.layers.12.block_sparse_moe.experts.0.w1 - model.layers.13.block_sparse_moe.experts.0.w1 - model.layers.0.block_sparse_moe.experts.0.w2 - model.layers.1.block_sparse_moe.experts.0.w2 - model.layers.2.block_sparse_moe.experts.0.w2 - model.layers.3.block_sparse_moe.experts.0.w2 - model.layers.4.block_sparse_moe.experts.0.w2 - 
model.layers.5.block_sparse_moe.experts.0.w2 - model.layers.6.block_sparse_moe.experts.0.w2 - model.layers.7.block_sparse_moe.experts.0.w2 - model.layers.8.block_sparse_moe.experts.0.w2 - model.layers.9.block_sparse_moe.experts.0.w2 - model.layers.10.block_sparse_moe.experts.0.w2 - model.layers.11.block_sparse_moe.experts.0.w2 - model.layers.12.block_sparse_moe.experts.0.w2 - model.layers.13.block_sparse_moe.experts.0.w2 - model.layers.0.block_sparse_moe.experts.0.w3 - model.layers.1.block_sparse_moe.experts.0.w3 - model.layers.2.block_sparse_moe.experts.0.w3 - model.layers.3.block_sparse_moe.experts.0.w3 - model.layers.4.block_sparse_moe.experts.0.w3 - model.layers.5.block_sparse_moe.experts.0.w3 - model.layers.6.block_sparse_moe.experts.0.w3 - model.layers.7.block_sparse_moe.experts.0.w3 - model.layers.8.block_sparse_moe.experts.0.w3 - model.layers.9.block_sparse_moe.experts.0.w3 - model.layers.10.block_sparse_moe.experts.0.w3 - model.layers.11.block_sparse_moe.experts.0.w3 - model.layers.12.block_sparse_moe.experts.0.w3 - model.layers.13.block_sparse_moe.experts.0.w3 - model.layers.0.block_sparse_moe.experts.1.w1 - model.layers.1.block_sparse_moe.experts.1.w1 - model.layers.2.block_sparse_moe.experts.1.w1 - model.layers.3.block_sparse_moe.experts.1.w1 - model.layers.4.block_sparse_moe.experts.1.w1 - model.layers.5.block_sparse_moe.experts.1.w1 - model.layers.6.block_sparse_moe.experts.1.w1 - model.layers.7.block_sparse_moe.experts.1.w1 - model.layers.8.block_sparse_moe.experts.1.w1 - model.layers.9.block_sparse_moe.experts.1.w1 - model.layers.10.block_sparse_moe.experts.1.w1 - model.layers.11.block_sparse_moe.experts.1.w1 - model.layers.12.block_sparse_moe.experts.1.w1 - model.layers.13.block_sparse_moe.experts.1.w1 - model.layers.40.block_sparse_moe.experts.1.w2 - model.layers.0.block_sparse_moe.experts.1.w2 - model.layers.1.block_sparse_moe.experts.1.w2 - model.layers.2.block_sparse_moe.experts.1.w2 - model.layers.3.block_sparse_moe.experts.1.w2 - model.layers.4.block_sparse_moe.experts.1.w2 - model.layers.5.block_sparse_moe.experts.1.w2 - model.layers.6.block_sparse_moe.experts.1.w2 - model.layers.7.block_sparse_moe.experts.1.w2 - model.layers.8.block_sparse_moe.experts.1.w2 - model.layers.9.block_sparse_moe.experts.1.w2 - model.layers.10.block_sparse_moe.experts.1.w2 - model.layers.11.block_sparse_moe.experts.1.w2 - model.layers.12.block_sparse_moe.experts.1.w2 - model.layers.5.block_sparse_moe.experts.1.w3 - model.layers.0.block_sparse_moe.experts.1.w3 - model.layers.1.block_sparse_moe.experts.1.w3 - model.layers.2.block_sparse_moe.experts.1.w3 - model.layers.3.block_sparse_moe.experts.1.w3 - model.layers.4.block_sparse_moe.experts.1.w3 - model.layers.6.block_sparse_moe.experts.1.w3 - model.layers.7.block_sparse_moe.experts.1.w3 - model.layers.8.block_sparse_moe.experts.1.w3 - model.layers.9.block_sparse_moe.experts.1.w3 - model.layers.10.block_sparse_moe.experts.1.w3 - model.layers.11.block_sparse_moe.experts.1.w3 - model.layers.12.block_sparse_moe.experts.1.w3 - model.layers.13.block_sparse_moe.experts.1.w3 - model.layers.1.block_sparse_moe.experts.2.w1 - model.layers.0.block_sparse_moe.experts.2.w1 - model.layers.2.block_sparse_moe.experts.2.w1 - model.layers.3.block_sparse_moe.experts.2.w1 - model.layers.4.block_sparse_moe.experts.2.w1 - model.layers.5.block_sparse_moe.experts.2.w1 - model.layers.6.block_sparse_moe.experts.2.w1 - model.layers.7.block_sparse_moe.experts.2.w1 - model.layers.8.block_sparse_moe.experts.2.w1 - model.layers.9.block_sparse_moe.experts.2.w1 - 
model.layers.10.block_sparse_moe.experts.2.w1 - model.layers.11.block_sparse_moe.experts.2.w1 - model.layers.12.block_sparse_moe.experts.2.w1 - model.layers.13.block_sparse_moe.experts.2.w1 - model.layers.1.block_sparse_moe.experts.2.w2 - model.layers.0.block_sparse_moe.experts.2.w2 - model.layers.2.block_sparse_moe.experts.2.w2 - model.layers.3.block_sparse_moe.experts.2.w2 - model.layers.4.block_sparse_moe.experts.2.w2 - model.layers.5.block_sparse_moe.experts.2.w2 - model.layers.6.block_sparse_moe.experts.2.w2 - model.layers.7.block_sparse_moe.experts.2.w2 - model.layers.8.block_sparse_moe.experts.2.w2 - model.layers.9.block_sparse_moe.experts.2.w2 - model.layers.10.block_sparse_moe.experts.2.w2 - model.layers.11.block_sparse_moe.experts.2.w2 - model.layers.12.block_sparse_moe.experts.2.w2 - model.layers.13.block_sparse_moe.experts.2.w2 - model.layers.1.block_sparse_moe.experts.2.w3 - model.layers.0.block_sparse_moe.experts.2.w3 - model.layers.2.block_sparse_moe.experts.2.w3 - model.layers.3.block_sparse_moe.experts.2.w3 - model.layers.4.block_sparse_moe.experts.2.w3 - model.layers.5.block_sparse_moe.experts.2.w3 - model.layers.6.block_sparse_moe.experts.2.w3 - model.layers.7.block_sparse_moe.experts.2.w3 - model.layers.8.block_sparse_moe.experts.2.w3 - model.layers.9.block_sparse_moe.experts.2.w3 - model.layers.10.block_sparse_moe.experts.2.w3 - model.layers.11.block_sparse_moe.experts.2.w3 - model.layers.12.block_sparse_moe.experts.2.w3 - model.layers.13.block_sparse_moe.experts.2.w3 - model.layers.2.block_sparse_moe.experts.3.w1 - model.layers.1.block_sparse_moe.experts.3.w1 - model.layers.0.block_sparse_moe.experts.3.w1 - model.layers.3.block_sparse_moe.experts.3.w1 - model.layers.4.block_sparse_moe.experts.3.w1 - model.layers.5.block_sparse_moe.experts.3.w1 - model.layers.6.block_sparse_moe.experts.3.w1 - model.layers.7.block_sparse_moe.experts.3.w1 - model.layers.8.block_sparse_moe.experts.3.w1 - model.layers.9.block_sparse_moe.experts.3.w1 - model.layers.10.block_sparse_moe.experts.3.w1 - model.layers.11.block_sparse_moe.experts.3.w1 - model.layers.12.block_sparse_moe.experts.3.w1 - model.layers.13.block_sparse_moe.experts.3.w1 - model.layers.2.block_sparse_moe.experts.3.w2 - model.layers.1.block_sparse_moe.experts.3.w2 - model.layers.0.block_sparse_moe.experts.3.w2 - model.layers.3.block_sparse_moe.experts.3.w2 - model.layers.4.block_sparse_moe.experts.3.w2 - model.layers.5.block_sparse_moe.experts.3.w2 - model.layers.6.block_sparse_moe.experts.3.w2 - model.layers.7.block_sparse_moe.experts.3.w2 - model.layers.8.block_sparse_moe.experts.3.w2 - model.layers.9.block_sparse_moe.experts.3.w2 - model.layers.10.block_sparse_moe.experts.3.w2 - model.layers.11.block_sparse_moe.experts.3.w2 - model.layers.12.block_sparse_moe.experts.3.w2 - model.layers.13.block_sparse_moe.experts.3.w2 - model.layers.2.block_sparse_moe.experts.3.w3 - model.layers.1.block_sparse_moe.experts.3.w3 - model.layers.0.block_sparse_moe.experts.3.w3 - model.layers.3.block_sparse_moe.experts.3.w3 - model.layers.4.block_sparse_moe.experts.3.w3 - model.layers.5.block_sparse_moe.experts.3.w3 - model.layers.6.block_sparse_moe.experts.3.w3 - model.layers.7.block_sparse_moe.experts.3.w3 - model.layers.8.block_sparse_moe.experts.3.w3 - model.layers.9.block_sparse_moe.experts.3.w3 - model.layers.10.block_sparse_moe.experts.3.w3 - model.layers.11.block_sparse_moe.experts.3.w3 - model.layers.12.block_sparse_moe.experts.3.w3 - model.layers.13.block_sparse_moe.experts.3.w3 - model.layers.3.block_sparse_moe.experts.4.w1 - 
model.layers.2.block_sparse_moe.experts.4.w1 - model.layers.1.block_sparse_moe.experts.4.w1 - model.layers.0.block_sparse_moe.experts.4.w1 - model.layers.4.block_sparse_moe.experts.4.w1 - model.layers.5.block_sparse_moe.experts.4.w1 - model.layers.6.block_sparse_moe.experts.4.w1 - model.layers.7.block_sparse_moe.experts.4.w1 - model.layers.8.block_sparse_moe.experts.4.w1 - model.layers.9.block_sparse_moe.experts.4.w1 - model.layers.10.block_sparse_moe.experts.4.w1 - model.layers.11.block_sparse_moe.experts.4.w1 - model.layers.12.block_sparse_moe.experts.4.w1 - model.layers.13.block_sparse_moe.experts.4.w1 - model.layers.2.block_sparse_moe.experts.4.w2 - model.layers.3.block_sparse_moe.experts.4.w2 - model.layers.1.block_sparse_moe.experts.4.w2 - model.layers.20.block_sparse_moe.experts.4.w2 - model.layers.0.block_sparse_moe.experts.4.w2 - model.layers.4.block_sparse_moe.experts.4.w2 - model.layers.5.block_sparse_moe.experts.4.w2 - model.layers.6.block_sparse_moe.experts.4.w2 - model.layers.7.block_sparse_moe.experts.4.w2 - model.layers.8.block_sparse_moe.experts.4.w2 - model.layers.9.block_sparse_moe.experts.4.w2 - model.layers.10.block_sparse_moe.experts.4.w2 - model.layers.11.block_sparse_moe.experts.4.w2 - model.layers.12.block_sparse_moe.experts.4.w2 - model.layers.3.block_sparse_moe.experts.4.w3 - model.layers.2.block_sparse_moe.experts.4.w3 - model.layers.1.block_sparse_moe.experts.4.w3 - model.layers.0.block_sparse_moe.experts.4.w3 - model.layers.4.block_sparse_moe.experts.4.w3 - model.layers.5.block_sparse_moe.experts.4.w3 - model.layers.6.block_sparse_moe.experts.4.w3 - model.layers.7.block_sparse_moe.experts.4.w3 - model.layers.8.block_sparse_moe.experts.4.w3 - model.layers.9.block_sparse_moe.experts.4.w3 - model.layers.10.block_sparse_moe.experts.4.w3 - model.layers.11.block_sparse_moe.experts.4.w3 - model.layers.12.block_sparse_moe.experts.4.w3 - model.layers.13.block_sparse_moe.experts.4.w3 - model.layers.4.block_sparse_moe.experts.5.w1 - model.layers.3.block_sparse_moe.experts.5.w1 - model.layers.2.block_sparse_moe.experts.5.w1 - model.layers.1.block_sparse_moe.experts.5.w1 - model.layers.0.block_sparse_moe.experts.5.w1 - model.layers.5.block_sparse_moe.experts.5.w1 - model.layers.6.block_sparse_moe.experts.5.w1 - model.layers.7.block_sparse_moe.experts.5.w1 - model.layers.8.block_sparse_moe.experts.5.w1 - model.layers.9.block_sparse_moe.experts.5.w1 - model.layers.10.block_sparse_moe.experts.5.w1 - model.layers.11.block_sparse_moe.experts.5.w1 - model.layers.12.block_sparse_moe.experts.5.w1 - model.layers.13.block_sparse_moe.experts.5.w1 - model.layers.4.block_sparse_moe.experts.5.w2 - model.layers.2.block_sparse_moe.experts.5.w2 - model.layers.3.block_sparse_moe.experts.5.w2 - model.layers.1.block_sparse_moe.experts.5.w2 - model.layers.0.block_sparse_moe.experts.5.w2 - model.layers.5.block_sparse_moe.experts.5.w2 - model.layers.6.block_sparse_moe.experts.5.w2 - model.layers.7.block_sparse_moe.experts.5.w2 - model.layers.8.block_sparse_moe.experts.5.w2 - model.layers.9.block_sparse_moe.experts.5.w2 - model.layers.10.block_sparse_moe.experts.5.w2 - model.layers.11.block_sparse_moe.experts.5.w2 - model.layers.12.block_sparse_moe.experts.5.w2 - model.layers.13.block_sparse_moe.experts.5.w2 - model.layers.4.block_sparse_moe.experts.5.w3 - model.layers.3.block_sparse_moe.experts.5.w3 - model.layers.2.block_sparse_moe.experts.5.w3 - model.layers.1.block_sparse_moe.experts.5.w3 - model.layers.0.block_sparse_moe.experts.5.w3 - model.layers.5.block_sparse_moe.experts.5.w3 - 
model.layers.6.block_sparse_moe.experts.5.w3 - model.layers.7.block_sparse_moe.experts.5.w3 - model.layers.8.block_sparse_moe.experts.5.w3 - model.layers.9.block_sparse_moe.experts.5.w3 - model.layers.10.block_sparse_moe.experts.5.w3 - model.layers.11.block_sparse_moe.experts.5.w3 - model.layers.12.block_sparse_moe.experts.5.w3 - model.layers.13.block_sparse_moe.experts.5.w3 - model.layers.5.block_sparse_moe.experts.6.w1 - model.layers.4.block_sparse_moe.experts.6.w1 - model.layers.3.block_sparse_moe.experts.6.w1 - model.layers.2.block_sparse_moe.experts.6.w1 - model.layers.1.block_sparse_moe.experts.6.w1 - model.layers.0.block_sparse_moe.experts.6.w1 - model.layers.6.block_sparse_moe.experts.6.w1 - model.layers.7.block_sparse_moe.experts.6.w1 - model.layers.8.block_sparse_moe.experts.6.w1 - model.layers.9.block_sparse_moe.experts.6.w1 - model.layers.10.block_sparse_moe.experts.6.w1 - model.layers.11.block_sparse_moe.experts.6.w1 - model.layers.12.block_sparse_moe.experts.6.w1 - model.layers.13.block_sparse_moe.experts.6.w1 - model.layers.5.block_sparse_moe.experts.6.w2 - model.layers.4.block_sparse_moe.experts.6.w2 - model.layers.2.block_sparse_moe.experts.6.w2 - model.layers.3.block_sparse_moe.experts.6.w2 - model.layers.1.block_sparse_moe.experts.6.w2 - model.layers.0.block_sparse_moe.experts.6.w2 - model.layers.6.block_sparse_moe.experts.6.w2 - model.layers.7.block_sparse_moe.experts.6.w2 - model.layers.8.block_sparse_moe.experts.6.w2 - model.layers.9.block_sparse_moe.experts.6.w2 - model.layers.10.block_sparse_moe.experts.6.w2 - model.layers.11.block_sparse_moe.experts.6.w2 - model.layers.12.block_sparse_moe.experts.6.w2 - model.layers.13.block_sparse_moe.experts.6.w2 - model.layers.5.block_sparse_moe.experts.6.w3 - model.layers.4.block_sparse_moe.experts.6.w3 - model.layers.3.block_sparse_moe.experts.6.w3 - model.layers.2.block_sparse_moe.experts.6.w3 - model.layers.1.block_sparse_moe.experts.6.w3 - model.layers.0.block_sparse_moe.experts.6.w3 - model.layers.6.block_sparse_moe.experts.6.w3 - model.layers.7.block_sparse_moe.experts.6.w3 - model.layers.8.block_sparse_moe.experts.6.w3 - model.layers.9.block_sparse_moe.experts.6.w3 - model.layers.10.block_sparse_moe.experts.6.w3 - model.layers.11.block_sparse_moe.experts.6.w3 - model.layers.12.block_sparse_moe.experts.6.w3 - model.layers.13.block_sparse_moe.experts.6.w3 - model.layers.5.block_sparse_moe.experts.7.w1 - model.layers.6.block_sparse_moe.experts.7.w1 - model.layers.3.block_sparse_moe.experts.7.w1 - model.layers.4.block_sparse_moe.experts.7.w1 - model.layers.2.block_sparse_moe.experts.7.w1 - model.layers.0.block_sparse_moe.experts.7.w1 - model.layers.7.block_sparse_moe.experts.7.w1 - model.layers.8.block_sparse_moe.experts.7.w1 - model.layers.9.block_sparse_moe.experts.7.w1 - model.layers.10.block_sparse_moe.experts.7.w1 - model.layers.11.block_sparse_moe.experts.7.w1 - model.layers.12.block_sparse_moe.experts.7.w1 - model.layers.13.block_sparse_moe.experts.7.w1 - model.layers.14.block_sparse_moe.experts.7.w1 - model.layers.6.block_sparse_moe.experts.7.w2 - model.layers.5.block_sparse_moe.experts.7.w2 - model.layers.4.block_sparse_moe.experts.7.w2 - model.layers.2.block_sparse_moe.experts.7.w2 - model.layers.3.block_sparse_moe.experts.7.w2 - model.layers.1.block_sparse_moe.experts.7.w2 - model.layers.0.block_sparse_moe.experts.7.w2 - model.layers.7.block_sparse_moe.experts.7.w2 - model.layers.8.block_sparse_moe.experts.7.w2 - model.layers.9.block_sparse_moe.experts.7.w2 - model.layers.10.block_sparse_moe.experts.7.w2 - 
model.layers.11.block_sparse_moe.experts.7.w2 - model.layers.12.block_sparse_moe.experts.7.w2 - model.layers.13.block_sparse_moe.experts.7.w2 - model.layers.6.block_sparse_moe.experts.7.w3 - model.layers.5.block_sparse_moe.experts.7.w3 - model.layers.4.block_sparse_moe.experts.7.w3 - model.layers.3.block_sparse_moe.experts.7.w3 - model.layers.2.block_sparse_moe.experts.7.w3 - model.layers.0.block_sparse_moe.experts.7.w3 - model.layers.7.block_sparse_moe.experts.7.w3 - model.layers.8.block_sparse_moe.experts.7.w3 - model.layers.9.block_sparse_moe.experts.7.w3 - model.layers.10.block_sparse_moe.experts.7.w3 - model.layers.11.block_sparse_moe.experts.7.w3 - model.layers.12.block_sparse_moe.experts.7.w3 - model.layers.13.block_sparse_moe.experts.7.w3 - model.layers.14.block_sparse_moe.experts.7.w3 - model.layers.0.block_sparse_moe.gate - model.layers.1.block_sparse_moe.gate - model.layers.2.block_sparse_moe.gate - model.layers.3.block_sparse_moe.gate - model.layers.4.block_sparse_moe.gate - model.layers.5.block_sparse_moe.gate - model.layers.6.block_sparse_moe.gate - model.layers.7.block_sparse_moe.gate - model.layers.8.block_sparse_moe.gate - model.layers.9.block_sparse_moe.gate - model.layers.10.block_sparse_moe.gate - model.layers.11.block_sparse_moe.gate - model.layers.12.block_sparse_moe.gate - model.layers.13.block_sparse_moe.gate model_config: output_router_logits: true datasets: - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Ultrachat200kunfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/SystemConversations.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: thingy val_set_size: 0.0002 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true gradient_accumulation_steps: 8 micro_batch_size: 4 num_epochs: 3 logging_steps: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2.7e-5 wandb_project: dolphin-2.9-mixtral-8x22b wandb_watch: wandb_run_id: wandb_log_model: train_on_inputs: false group_by_length: false 
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
# resume_from_checkpoint: /home/ehartford/axolotl/out/checkpoint-316
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

saves_per_epoch: 8
save_total_limit: 2
save_steps:
evals_per_epoch: 4
eval_sample_packing: false

debug:
deepspeed: deepspeed_configs/zero3_bf16_cpuoffload_params.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
  eos_token: "<|im_end|>"
tokens:
  - "<|im_start|>"
```

</details><br>

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7022        | 0.0   | 1    | 0.6989          |
| 0.5344        | 0.25  | 238  | 0.5138          |
| 0.5204        | 0.5   | 476  | 0.5018          |
| 0.5059        | 0.75  | 714  | 0.4951          |
| 0.5112        | 1.0   | 952  | 0.4911          |
| 0.4561        | 1.24  | 1190 | 0.4978          |
| 0.478         | 1.49  | 1428 | 0.4935          |
| 0.4714        | 1.74  | 1666 | 0.4899          |
| 0.4626        | 1.99  | 1904 | 0.4861          |
| 0.3675        | 2.22  | 2142 | 0.5240          |
| 0.3595        | 2.47  | 2380 | 0.5229          |
| 0.3438        | 2.72  | 2618 | 0.5217          |

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer", "axolotl"], "datasets": ["cognitivecomputations/Dolphin-2.9", "teknium/OpenHermes-2.5", "m-a-p/CodeFeedback-Filtered-Instruction", "cognitivecomputations/dolphin-coder", "cognitivecomputations/samantha-data", "HuggingFaceH4/ultrachat_200k", "microsoft/orca-math-word-problems-200k", "abacusai/SystemChat-1.1", "Locutusque/function-calling-chatml", "internlm/Agent-FLAN"], "base_model": "mistral-community/Mixtral-8x22B-v0.1", "model-index": [{"name": "out", "results": []}]}
blockblockblock/dolphin-2.9-mixtral-8x22b-bpw3.5-exl2
null
[ "transformers", "safetensors", "mixtral", "text-generation", "generated_from_trainer", "axolotl", "conversational", "en", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:microsoft/orca-math-word-problems-200k", "dataset:abacusai/SystemChat-1.1", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:28:48+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
quickstep3621/jv34wep
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:29:03+00:00
text2text-generation
transformers
{}
samzirbo/mT5.test.tedtalks.simple.16000.64.128
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:30:18+00:00
null
transformers
# Uploaded model

- **Developed by:** theGhoul21
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-gguf-q4_k_m
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:31:15+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - embracellm/sushi18_LoRA

<Gallery />

## Model description

These are embracellm/sushi18_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use "a photo of Shrimp Tempura Crunch Roll" to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/embracellm/sushi18_LoRA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

(A hedged example sketch is provided after this card.)

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
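Pending the official snippet above, a minimal sketch using diffusers' standard SDXL + LoRA loading path, with the trigger phrase from this card; the VAE swap mirrors the training setup, and the extra prompt detail is hypothetical:

```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

# Same fp16-safe VAE that was used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA weights from this repo.
pipe.load_lora_weights("embracellm/sushi18_LoRA")

image = pipe("a photo of Shrimp Tempura Crunch Roll on a slate plate").images[0]
image.save("sushi.png")
```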
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of Shrimp Tempura Crunch Roll", "widget": []}
embracellm/sushi18_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-05-01T07:32:12+00:00
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
Abhaykoul/UNIQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:33:32+00:00
null
transformers
# Uploaded model

- **Developed by:** jimdaro
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
jimdaro/lora_model_001
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:34:07+00:00
null
transformers
# Uploaded model

- **Developed by:** arnav0204
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-it-bnb-4bit

This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-it-bnb-4bit"}
arnav0204/agrimodel
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:35:27+00:00
null
null
{}
soaring0616/my_test_multiple_choice_model
null
[ "region:us" ]
null
2024-05-01T07:36:37+00:00
null
null
KetchupMix is a merge of MustardMix (wandaomix v4) and Mizu mixes v7. Permission for the merge was given by both models' authors/owners. Please give a follow and support the owners of the original models:

- DaoOwoArts
- kaiyo

This was originally meant for personal use only, but I thought, why not share it with others?

v2 changes: added some models to the merge: BreakDomain by BD and Dark Sushi Mix by Aitasai.

v3 changes: added saki mix and played with some block weights.

v4 changes: I forgot what I added.

v4 darker changes: a bit darker than the first test and has more of that line. Credits: DaoOwoArts for wandaomix v2 and MustardMix; kaiyo for Mizu mixes v10. Added Galena Redux for the texture (will take down if the author wants to).

v5 changes: added abysshellmaple into the mix.

v5 darker changes: added Dark Sushi Mix by Aitasai.
{"license": "creativeml-openrail-m", "tags": ["art"]}
NeverWinter13/KetchupMix
null
[ "art", "license:creativeml-openrail-m", "region:us" ]
null
2024-05-01T07:37:00+00:00
null
null
{"license": "apache-2.0"}
justyoung/rvcm
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-01T07:37:08+00:00
text-generation
transformers
# Uploaded model

- **Developed by:** theGhoul21
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
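As a hedged illustration only, here is a minimal sketch of how such a 4-bit Unsloth export is typically loaded for inference. The sequence length and generation settings below are assumptions, not details stated in the card, and the sketch presumes the repository contains a merged 4-bit model rather than bare LoRA adapters.

```python
# Hedged sketch: loading a 4-bit Unsloth checkpoint for inference.
# max_seq_length and generation settings are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="theGhoul21/srl-sft-010524-4bit",
    max_seq_length=2048,   # assumption: training length not stated in the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```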
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-4bit
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-05-01T07:39:40+00:00
null
null
{"license": "openrail"}
rezaazimisarteshnizi64/Reza
null
[ "license:openrail", "region:us" ]
null
2024-05-01T07:39:47+00:00
null
null
```
# -e lets main interpret the \n escapes embedded in the prompt string
./build/bin/main -m ./models/llama3_alpaca_dpo_GGUF-unsloth.F16.gguf -e \
  -p 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nWhy is the sky blue?\n\n### Input:\n\n\n### Response:\n'
```
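For reference, a hedged llama-cpp-python equivalent of the same Alpaca-style prompt; the package choice and generation settings are assumptions, not part of the original card.

```python
# Hedged sketch: the same Alpaca-style prompt served through llama-cpp-python.
# Assumes the GGUF file path matches the command above.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama3_alpaca_dpo_GGUF-unsloth.F16.gguf")
prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\nWhy is the sky blue?\n\n"
    "### Input:\n\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=128)  # returns an OpenAI-style completion dict
print(out["choices"][0]["text"])
```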
{"license": "apache-2.0"}
vincentoh/llama3-alpaca-GGUF
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-05-01T07:40:20+00:00
text-generation
transformers
{}
c-tawayip/decoder-t2sql-1.3b-instruct
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:40:41+00:00
null
peft
**Note**: This model card has been generated automatically according to the information the Trainer had access to. Visit the [model card](https://ritvik19.github.io/zephyr-mini/) to see the full description.

# zephyr-tinyllama-sft-qlora

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1943

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1908        | 0.9991 | 570  | 1.1943          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.40.1
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
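As a hedged sketch (not part of the generated card), a QLoRA adapter like this one is typically attached to its base model with PEFT; the 4-bit quantization settings below are assumptions mirroring a common QLoRA setup rather than settings confirmed by the card.

```python
# Hedged sketch: attaching the QLoRA adapter to its quantized base model.
# Quantization settings are assumptions, not taken from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
adapter_id = "Ritvik19/zephyr-tinyllama-sft-qlora"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config)
model = PeftModel.from_pretrained(base, adapter_id)  # wraps base with the adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)
```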
{"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "model-index": [{"name": "zephyr-tinyllama-sft-qlora", "results": []}]}
Ritvik19/zephyr-tinyllama-sft-qlora
null
[ "peft", "safetensors", "llama", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "region:us" ]
null
2024-05-01T07:40:58+00:00
text-classification
transformers
{"license": "unknown"}
amanda-901014/roberta-easy
null
[ "transformers", "pytorch", "roberta", "text-classification", "license:unknown", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:42:25+00:00
text-generation
transformers
{}
asucada/Llama-2-7b-chat-finetune
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:44:02+00:00
text2text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
TTTTao725/molt5-augmented-contrastive-200-small-whole_model
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:45:04+00:00