| Column | Type | Min | Max |
| ------ | ---- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-27 00:42:13 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (499 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-27 00:40:00 |
| card | string (length) | 11 | 1.01M |
CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-awq
CorticalStack
2024-02-28T20:47:14Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-02-28T20:41:17Z
---
license: apache-2.0
---

<img src="neurotic-crown-clown-tak-stack.png" alt="Neurotic crown clown tak stack logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-awq

CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-awq is an AWQ quantised version of [CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

### AWQ configuration

- Zero point: True
- Q group size: 128
- W bit: 4
- Version: GEMM
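Given the Transformers 4.35+ support noted above, a minimal sketch of loading this AWQ model from Python; the device placement and sample prompt are illustrative assumptions, not part of the original card:

```python
# Minimal sketch: load the AWQ-quantised model with Transformers >= 4.35 on an NVIDIA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```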
faux-monke/LunarLander_DeepRL
faux-monke
2024-02-28T20:40:31Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-25T20:25:55Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 287.87 +/- 25.46
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal sketch of loading the agent; the checkpoint filename is an assumption, so check the repository's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption -- check the repo for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="faux-monke/LunarLander_DeepRL", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
ahmedfaiyaz/OkkhorDiffusion-CMATERdb
ahmedfaiyaz
2024-02-28T20:39:48Z
40
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "bn", "license:cc-by-nc-4.0", "diffusers:OkkhorDiffusionPipeline", "region:us" ]
text-to-image
2024-02-27T05:36:08Z
--- license: cc-by-nc-4.0 language: - bn library_name: diffusers pipeline_tag: text-to-image inference: false --- # Okkhor Diffusion Okkhor Diffusion is a category of Denoising Diffusion Probabilistic Models designed to generate images of Bangla handwritten characters. This model card corresponds to Okkhor-Diffusion trained on **CMATERdb dataset**. ## Variants - [Okkhor Diffusion trained on Banglalekha-Isolated](https://huggingface.co/ahmedfaiyaz/OkkhorDiffusion) - [Okkhor Diffusion trained on CMATERdb](https://huggingface.co/ahmedfaiyaz/OkkhorDiffusion-CMATERdb) - [Okkhor Diffusion trained on Ekush](https://huggingface.co/ahmedfaiyaz/OkkhorDiffusion-Ekush) ### Usage ```py from diffusers import DiffusionPipeline import torch device="cuda" pipeline = DiffusionPipeline.from_pretrained( "ahmedfaiyaz/OkkhorDiffusion-CMATERdb", custom_pipeline="ahmedfaiyaz/OkkhorDiffusion", embedding=torch.int16 ) pipeline.to(device) pipeline.embedding=torch.tensor([0],device=device) # 'প্র': 0 pipeline(batch_size=1,num_inference_steps=100).images[0] ``` |OkkhorDiffusion-CMATERdb|| |---------|--------| |Character| Serial | |প্র|0| |ঙ্গ|1| |ক্ষ|2| |ত্র|3| |ন্দ|4| |চ্ছ|5| |ন্ত|6| |ন্দ্র|7| |স্ত|8| |ন্তু|9| |গ্র|10| |স্থ|11| |স্ট|12| |ম্ব|13| |স্ব|14| |ত্ত|15| |ক্ত|16| |ন্ট|17| |ল্প|18| |ষ্ট|19| |ন্ত্র|20| |ক্র|21| |ন্ন|22| |দ্ধ|23| |ন্ধ|24| |ঙ্ক|25| |ন্ড|26| |ফ্র|27| |ম্প|28| |স্ক|29| |জ্ঞ|30| |ক্ট|31| |শ্চ|32| |ট্র|33| |ত্ব|34| |ল্ল|35| |ব্র|36| |ঞ্চ|37| |ণ্ড|38| |ক্স|39| |শ্র|40| |দ্র|41| |স্প|42| |ঞ্জ|43| |ন্স|44| |ম্ভ|45| |শ্ব|46| |ব্দ|47| |শ্ন|48| |প্প|49| |ব্ল|50| |প্ত|51| |ক্ল|52| |ষ্ট্র|53| |দ্ব|54| |ট্ট|55| |গ্ল|56| |ল্ট|57| |ষ্ঠ|58| |স্ত্র|59| |প্ল|60| |চ্চ|61| |স্ম|62| |দ্দ|63| |গ্ন|64| |জ্ব|65| |ষ্ক|66| |ত্ম|67| |ড্র|68| |ম্ম|69| |ণ্ট|70| |ম্প্র|71| |প্ন|72| |ন্ম|73| |স্ফ|74| |ল্দ|75| |ত্ত্ব|76| |জ্জ|77| |ক্ষ্ম|78| |ষ্ণ|79| |ন্ব|80| |ক্ক|81| |ন্থ|82| |ড্ড|83| |ব্ব|84| |ন্ট্র|85| |ণ্ঠ|86| |প্ট|87| |স্তু|88| |ধ্ব|89| |হ্ণ|90| |ভ্র|91| |ল্ক|92| |স্ল|93| |হ্ন|94| |ত্ন|95| |ষ্ক্র|96| |ঘ্র|97| |দ্ভ|98| |শ্ল|99| |ব্ধ|100| |ষ্ম|101| |স্ক্র|102| |ড়্গ|103| |জ্জ্ব|104| |শ্ম|105| |দ্ম|106| |ক্ব|107| |ম্র|108| |গ্ধ|109| |ব্জ|110| |স্ন|111| |ন্দ্ব|112| |হ্ম|113| |ঙ্ঘ|114| |খ্র|115| |ত্থ|116| |ল্ব|117| |ম্ন|118| |ঘ্ন|119| |গ্গ|120| |ক্ষ্ণ|121| |গ্রু|122| |চ্ছ্ব|123| |ণ্ণ|124| |ল্ম|125| |স্র|126| |ম্ল|127| |ষ্প্র|128| |ঞ্ঝ|129| |স্প্র|130| |ম্ভ্র|131| |ষ্প|132| |ঙ্খ|133| |জ্র|134| |গ্ব|135| |থ্ব|136| |ণ্ব|137| |হ্ব|138| |দ্দ্ব|139| |দ্ঘ|140| |ধ্র|141| |হ্ল|142| |গ্ম|143| |ল্গ|144| |স্খ|145| |থ্র|146| |ন্ধ্র|147| |ফ্ল|148| |ঙ্ক্ষ|149| |ণ্ম|150| |ঞ্ছ|151| |ম্ফ|152| |হ্র|153| |প্রু|154| |ত্রু|155| |ভ্ল|156| |শ্রু|157| |দ্রু|158| |ঙ্ম|159| |ক্ম|160| |দ্গ|161| |ন্ড্র|162| |ট্ব|163| |চ্ঞ|164| |প্স|165| |ল্ড|166| |ষ্ফ|167| |শ্ছ|168| |জ্ঝ|169| |স্ট্র|170| |অ|171| |আ|172| |ই|173| |ঈ|174| |উ|175| |ঊ|176| |ঋ|177| |এ|178| |ঐ|179| |ও|180| |ঔ|181| |ক|182| |খ|183| |গ|184| |ঘ|185| |ঙ|186| |চ|187| |ছ|188| |জ|189| |ঝ|190| |ঞ|191| |ট|192| |ঠ|193| |ড|194| |ঢ|195| |ণ|196| |ত|197| |থ|198| |দ|199| |ধ|200| |ন|201| |প|202| |ফ|203| |ব|204| |ভ|205| |ম|206| |য|207| |র|208| |ল|209| |শ|210| |ষ|211| |স|212| |হ|213| |ড়|214| |ঢ়|215| |য়|216| |ৎ|217| |ং|218| |ঃ|219| |ঁ|220| # Citation ``` @ARTICLE{10445466, author={Fuad, Md Mubtasim and Faiyaz, A. and Arnob, Noor Mairukh Khan and Mridha, M.F. 
and Saha, Aloke Kumar and Aung, Zeyar}, journal={IEEE Access}, title={Okkhor-Diffusion: Class Guided Generation of Bangla Isolated Handwritten Characters using Denoising Diffusion Probabilistic Model (DDPM)}, year={2024}, volume={}, number={}, pages={1-1}, abstract={Bangla has a unique script with a complex set of characters, making it a fascinating subject of study for linguists and cultural enthusiasts. Unique in some of its similar characters which are only distinguishable by subtle differences in their shapes and diacritics, there has been a notable increase in research on Bangla character recognition and classification using machine learning-based approaches. However, Handwritten Bangla Character Recognition (HBCR) training requires an adequate amount of data from a diversely distributed dataset. Making diverse datasets for HBCR training is a challenging and tedious task to carry out. Yet, there is limited research on the automatic generation of handwritten Bangla characters. Motivated by this open area of research, this paper proposes a novel approach ’Okkhor-Diffusion’ for class-guided generation of Bangla isolated handwritten characters using a novel Denoising Diffusion Probabilistic Model (DDPM). No prior research has used DDPM for this purpose, making the proposed approach novel. The DDPM is a generative model that uses a diffusion process to transform noise-corrupted data into diverse samples; despite being trained on a small training set. In our experiments, StyleGAN2-ADA had notably inferior performance compared to Okkhor-Diffusion in generating realistic isolated handwritten Bangla characters. Experimental results on the BanglaLekha-Isolated dataset demonstrate that the proposed Okkhor-Diffusion model generates realistic isolated handwritten Bangla characters, with a mean Multi-Scale Structural Similarity Index Measure (MS-SSIM) score of 0.178 compared to 0.177 for the real samples. The Fréchet Inception Distance (FID) score for the synthetic handwritten Bangla characters is 5.426. Finally, the newly proposed Bangla Character Aware Fréchet Inception Distance (BCAFID) score of the proposed Okkhor-Diffusion model is 10.388.}, keywords={Deep learning;Handwritten character generation;Generative Model;Denoising Diffusion Probabilistic Model}, doi={10.1109/ACCESS.2024.3370674}, ISSN={2169-3536}, month={},} ```
adriana98/whisper-v3-LORA-spanish
adriana98
2024-02-28T20:38:09Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-27T21:43:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SethGA/neocortex-grounded
SethGA
2024-02-28T20:33:52Z
3
1
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-hf", "base_model:adapter:NousResearch/Llama-2-7b-hf", "4-bit", "bitsandbytes", "region:us" ]
null
2024-02-28T19:03:20Z
--- library_name: peft tags: - axolotl - generated_from_trainer base_model: NousResearch/Llama-2-7b-hf model-index: - name: neocortex-grounded results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: NousResearch/Llama-2-7b-hf model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer is_llama_derived_model: true hub_model_id: neocortex-grounded load_in_8bit: false load_in_4bit: true strict: false datasets: - path: SethGA/neocortex_grounded_23k type: alpaca shards: 20 dataset_prepared_path: val_set_size: 0.05 output_dir: ./qlora-out adapter: qlora lora_model_dir: sequence_len: 4096 sample_packing: false eval_sample_packing: false pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: neocortex wandb_entity: wandb_watch: wandb_run_id: wandb_log_model: checkpoint gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 3 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 eval_steps: 20 eval_table_size: 5 save_strategy: epoch save_steps: debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>" ``` </details><br> # neocortex-grounded This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1270 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7091 | 0.01 | 1 | 1.7034 | | 1.3312 | 0.29 | 20 | 1.2385 | | 1.1599 | 0.58 | 40 | 1.1702 | | 1.1673 | 0.87 | 60 | 1.1425 | | 1.0802 | 1.16 | 80 | 1.1291 | | 1.0736 | 1.45 | 100 | 1.1238 | | 1.0308 | 1.74 | 120 | 1.1185 | | 1.0042 | 2.03 | 140 | 1.1110 | | 0.997 | 2.32 | 160 | 1.1274 | | 0.8535 | 2.61 | 180 | 1.1278 | | 0.9331 | 2.9 | 200 | 1.1270 | ### Framework versions - PEFT 0.9.0 - Transformers 4.39.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.17.1 - Tokenizers 0.15.0
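Since this repo holds a QLoRA adapter rather than full weights, inference means loading the adapter on top of its Llama-2-7b base. A minimal sketch; the alpaca-style prompt mirrors the `type: alpaca` dataset in the config above and is an assumption:

```python
# Minimal sketch: load the QLoRA adapter over NousResearch/Llama-2-7b-hf via PEFT.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "SethGA/neocortex-grounded",  # adapter repo; the base model is resolved automatically
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

# Alpaca-style prompt (assumption, based on the dataset type in the axolotl config).
prompt = "### Instruction:\nName three planets.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```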
dmusingu/luganda_wav2vec2_ctc_reg
dmusingu
2024-02-28T20:23:29Z
4
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_7_0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-26T13:49:15Z
--- tags: - generated_from_trainer datasets: - common_voice_7_0 model-index: - name: luganda_wav2vec2_ctc_reg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # luganda_wav2vec2_ctc_reg This model was trained from scratch on the common_voice_7_0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
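A minimal sketch of transcribing audio with this checkpoint through the Transformers ASR pipeline; the audio filename is a placeholder:

```python
# Minimal sketch: CTC transcription with the fine-tuned wav2vec2 model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="dmusingu/luganda_wav2vec2_ctc_reg")
print(asr("luganda_sample.wav")["text"])  # placeholder audio path
```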
bartowski/Ignis-7B-DPO-Laser-exl2
bartowski
2024-02-28T20:22:28Z
0
0
null
[ "text-generation", "license:apache-2.0", "region:us" ]
text-generation
2024-02-28T20:07:48Z
---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of Ignis-7B-DPO-Laser

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.14">turboderp's ExLlamaV2 v0.0.14</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b>

Each branch contains a different bits-per-weight quantization, with the main one containing only the measurement.json for further conversions.

Original model: https://huggingface.co/NeuralNovel/Ignis-7B-DPO-Laser/

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Ignis-7B-DPO-Laser-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Ignis-7B-DPO-Laser-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Ignis-7B-DPO-Laser-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Ignis-7B-DPO-Laser-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Ignis-7B-DPO-Laser-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Ignis-7B-DPO-Laser-exl2 Ignis-7B-DPO-Laser-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Ignis-7B-DPO-Laser-exl2`:

```shell
mkdir Ignis-7B-DPO-Laser-exl2
huggingface-cli download bartowski/Ignis-7B-DPO-Laser-exl2 --local-dir Ignis-7B-DPO-Laser-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir Ignis-7B-DPO-Laser-exl2-6_5
huggingface-cli download bartowski/Ignis-7B-DPO-Laser-exl2 --revision 6_5 --local-dir Ignis-7B-DPO-Laser-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
mkdir Ignis-7B-DPO-Laser-exl2-6.5
huggingface-cli download bartowski/Ignis-7B-DPO-Laser-exl2 --revision 6_5 --local-dir Ignis-7B-DPO-Laser-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
LoneStriker/Mixtral-8x7B-Holodeck-v1-4.0bpw-h6-exl2
LoneStriker
2024-02-28T20:16:50Z
6
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "pytorch", "fine-tuned", "moe", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T20:07:10Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- pytorch
- mixtral
- fine-tuned
- moe
---

# Mixtral 8x7B - Holodeck

## Model Description

Mixtral 8x7B-Holodeck is a finetune created using Mixtral's 8x7B model.

## Training data

The training data contains around 3000 ebooks in various genres. Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]`

***

### Limitations and Biases

Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
mlwohls/taxi
mlwohls
2024-02-28T20:15:12Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-28T20:15:10Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.46 +/- 2.82
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

A minimal sketch; the course-style `load_from_hub` helper (download and unpickle the saved model dict) is inlined here as an assumption:

```python
import pickle

import gym
from huggingface_hub import hf_hub_download

# Download and unpickle the saved Q-table dict (course-style load_from_hub, inlined).
with open(hf_hub_download(repo_id="mlwohls/taxi", filename="q-learning.pkl"), "rb") as f:
    model = pickle.load(f)

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
rbtprograms/merged_mistral_base_math
rbtprograms
2024-02-28T20:08:58Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2203.05482", "base_model:WizardLMTeam/WizardMath-7B-V1.1", "base_model:merge:WizardLMTeam/WizardMath-7B-V1.1", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:56:05Z
---
base_model:
- WizardLM/WizardMath-7B-V1.1
- mistralai/Mistral-7B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---

# merged_2

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:

* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: float16
merge_method: linear
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: mistralai/Mistral-7B-v0.1
    parameters:
      weight: 1.0
  - layer_range: [0, 32]
    model:
      model:
        path: WizardLM/WizardMath-7B-V1.1
    parameters:
      weight: 0.0
```
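A config like this is typically executed with the mergekit CLI; a minimal sketch, where the config filename and output path are assumptions. Note that with a weight of 0.0 on WizardMath, the linear combination reduces to the Mistral-7B-v0.1 weights alone.

```shell
# Minimal sketch: run the YAML config above with mergekit (paths are illustrative).
pip install mergekit
mergekit-yaml merge_config.yaml ./merged_mistral_base_math
```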
LoneStriker/Mixtral-8x7B-Holodeck-v1-3.75bpw-h6-exl2
LoneStriker
2024-02-28T20:07:09Z
8
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "pytorch", "fine-tuned", "moe", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:58:07Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- pytorch
- mixtral
- fine-tuned
- moe
---

# Mixtral 8x7B - Holodeck

## Model Description

Mixtral 8x7B-Holodeck is a finetune created using Mixtral's 8x7B model.

## Training data

The training data contains around 3000 ebooks in various genres. Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]`

***

### Limitations and Biases

Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
Qiao121/pokemon-lora
Qiao121
2024-02-28T19:58:14Z
1
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-28T09:54:36Z
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
base_model: runwayml/stable-diffusion-v1-5
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# LoRA text2image fine-tuning - Qiao121/pokemon-lora

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

## Intended uses & limitations

#### How to use

A minimal sketch of running the pipeline with these LoRA weights; the prompt is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.load_lora_weights("Qiao121/pokemon-lora")
pipe.to("cuda")

image = pipe("a cute green pokemon with big eyes").images[0]  # illustrative prompt
image.save("pokemon.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
LoneStriker/Mixtral-8x7B-Holodeck-v1-3.5bpw-h6-exl2
LoneStriker
2024-02-28T19:58:05Z
6
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "pytorch", "fine-tuned", "moe", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:49:40Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- pytorch
- mixtral
- fine-tuned
- moe
---

# Mixtral 8x7B - Holodeck

## Model Description

Mixtral 8x7B-Holodeck is a finetune created using Mixtral's 8x7B model.

## Training data

The training data contains around 3000 ebooks in various genres. Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]`

***

### Limitations and Biases

Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
Angel071816/mi-super-angelo
Angel071816
2024-02-28T19:52:03Z
163
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T19:51:14Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: mi-super-angelo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mi-super-angelo This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
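A minimal sketch of querying the fine-tuned classifier with the Transformers pipeline; the example sentence is illustrative, and the label set depends on the unknown fine-tuning dataset:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Angel071816/mi-super-angelo")
print(clf("This is a sample sentence."))  # labels depend on the (unknown) training data
```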
marisabatalla/autotrain-2t8zz-mwbyj
marisabatalla
2024-02-28T19:50:17Z
191
0
transformers
[ "transformers", "safetensors", "mobilenet_v1", "image-classification", "autotrain", "dataset:autotrain-2t8zz-mwbyj/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-28T19:50:15Z
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
datasets:
- autotrain-2t8zz-mwbyj/autotrain-data
---

# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

- loss: 0.577736496925354
- f1: 1.0
- precision: 1.0
- recall: 1.0
- auc: 0.0
- accuracy: 1.0
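A minimal sketch of running inference, reusing one of the widget images from the card above:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="marisabatalla/autotrain-2t8zz-mwbyj")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```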
LoneStriker/Mixtral-8x7B-Holodeck-v1-3.0bpw-h6-exl2
LoneStriker
2024-02-28T19:49:37Z
6
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "pytorch", "fine-tuned", "moe", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:34:34Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- pytorch
- mixtral
- fine-tuned
- moe
---

# Mixtral 8x7B - Holodeck

## Model Description

Mixtral 8x7B-Holodeck is a finetune created using Mixtral's 8x7B model.

## Training data

The training data contains around 3000 ebooks in various genres. Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]`

***

### Limitations and Biases

Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
lunarsylph/gemmacell_v4
lunarsylph
2024-02-28T19:44:34Z
119
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:33:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jahongir94/uzbert
Jahongir94
2024-02-28T19:18:18Z
0
0
transformers
[ "transformers", "text-classification", "dataset:Ravshan/kun_uz_news", "dataset:s3h/custom-qalb-classification", "dataset:tahrirchi/uz-crawl", "dataset:tahrirchi/uz-books", "dataset:latofat/uzpos", "dataset:Sanatbek/uzbek-kazakh-parallel-corpora", "dataset:elmurod1202/uzbek-sentiment-analysis", "dataset:murodbek/uz-text-classification", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T19:07:49Z
--- datasets: - Ravshan/kun_uz_news - s3h/custom-qalb-classification - tahrirchi/uz-crawl - tahrirchi/uz-books - latofat/uzpos - Sanatbek/uzbek-kazakh-parallel-corpora - elmurod1202/uzbek-sentiment-analysis - murodbek/uz-text-classification metrics: - bertscore library_name: transformers pipeline_tag: text-classification ---
ArchiveAI/Thespis-CurtainCall-7b-v0.1.2
ArchiveAI
2024-02-28T19:17:53Z
1
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:17:53Z
---
license: cc-by-nc-4.0
---

This model is the first in a series of experiments to make my models a bit smarter. It's nowhere near done, but my initial testing was good, so I'm uploading it so people can check it out.

Datasets Used:

* OpenOrcaSlim
* Dolphin
* Capybara
* Augmental
* ToxicQA
* Magiccoder-Evol-Instruct-110k

## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template )

```
{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```

## Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.03)

## Recommended Kobold Horde Preset -> MinP
ArchiveAI/Thespis-CurtainCall-7b-v0.2.1
ArchiveAI
2024-02-28T19:17:27Z
1
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:17:27Z
---
license: cc-by-nc-4.0
---

## Outdated, please use https://huggingface.co/cgato/Thespis-CurtainCall-7b-v0.2.2

This model is the first in a series of experiments to make my models a bit smarter. It's nowhere near done, but my initial testing was good, so I'm uploading it so people can check it out.

Datasets Used:

* Dolphin
* Ultrachat
* Capybara
* Augmental
* ToxicQA
* Magiccoder-Evol-Instruct-110k
* Yahoo Answers
* OpenOrca
* Airoboros 3.1
* grimulkan/physical-reasoning and theory-of-mind

## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template )

```
{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```

## Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.03)

## Recommended Kobold Horde Preset -> MinP
ArchiveAI/Thespis-CurtainCall-7b-v0.2.2
ArchiveAI
2024-02-28T19:17:19Z
1
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:17:19Z
---
license: cc-by-nc-4.0
---

I'm happy with where this model currently is, so I am releasing the 7b for testing. If I get good feedback I'll be scaling up to larger models! Thank you!

Datasets Used:

* Dolphin
* Ultrachat
* Capybara
* Augmental
* ToxicQA
* Magiccoder-Evol-Instruct-110k
* Yahoo Answers
* OpenOrca
* Airoboros 3.1
* grimulkan/physical-reasoning and theory-of-mind

## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template )

```
{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```

## Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.03)

## Recommended Kobold Horde Preset -> MinP
Lienid/nous-thirteen
Lienid
2024-02-28T19:12:12Z
115
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T18:09:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlwohls/ppo-Huggy
mlwohls
2024-02-28T19:08:40Z
6
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-02-28T19:08:37Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: mlwohls/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
liminerity/ultra0-half-the-layers
liminerity
2024-02-28T19:07:06Z
115
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/ultra0", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:05:38Z
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/ultra0
- liminerity/ultra0
---

# ultra0-half-the-layers

ultra0-half-the-layers is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):

* [liminerity/ultra0](https://huggingface.co/liminerity/ultra0)
* [liminerity/ultra0](https://huggingface.co/liminerity/ultra0)

## 🧩 Configuration

```yaml
slices:
- sources:
  - model: liminerity/ultra0
    layer_range: [0, 12]
  - model: liminerity/ultra0
    layer_range: [0, 12]
merge_method: slerp
base_model: liminerity/ultra0
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
dtype: bfloat16
```
tanatapanun/fine-tuned-BART-20-epochs-wanglab-512-output
tanatapanun
2024-02-28T18:58:50Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-28T18:21:39Z
--- base_model: bart-base tags: - generated_from_trainer metrics: - rouge model-index: - name: fine-tuned-BART-20-epochs-wanglab-512-output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-BART-20-epochs-wanglab-512-output This model is a fine-tuned version of [bart-base](https://huggingface.co/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4945 - Rouge1: 0.0871 - Rouge2: 0.0196 - Rougel: 0.0787 - Rougelsum: 0.0787 - Bertscore F1: 0.837 - Bleurt Score: -1.873 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bertscore F1 | Bleurt Score | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:------------:|:------------:|:-------:| | No log | 1.0 | 301 | 1.5515 | 0.0293 | 0.0 | 0.0286 | 0.0282 | 0.7994 | -2.1159 | 11.68 | | 4.8736 | 2.0 | 602 | 0.5364 | 0.0738 | 0.0183 | 0.0655 | 0.0653 | 0.8345 | -1.6735 | 20.0 | | 4.8736 | 3.0 | 903 | 0.4811 | 0.071 | 0.0191 | 0.0677 | 0.0677 | 0.8359 | -1.7563 | 20.0 | | 0.5377 | 4.0 | 1204 | 0.4621 | 0.0506 | 0.0125 | 0.0475 | 0.0474 | 0.8566 | -1.8275 | 8.0 | | 0.4145 | 5.0 | 1505 | 0.4496 | 0.0231 | 0.0036 | 0.0237 | 0.0233 | 0.8458 | -1.4636 | 8.0 | | 0.4145 | 6.0 | 1806 | 0.4455 | 0.078 | 0.0194 | 0.0714 | 0.071 | 0.8469 | -1.3815 | 20.0 | | 0.336 | 7.0 | 2107 | 0.4416 | 0.0871 | 0.0196 | 0.0787 | 0.0787 | 0.837 | -1.873 | 20.0 | | 0.336 | 8.0 | 2408 | 0.4440 | 0.0878 | 0.0195 | 0.0794 | 0.0791 | 0.8409 | -1.4561 | 20.0 | | 0.2698 | 9.0 | 2709 | 0.4505 | 0.0231 | 0.0036 | 0.0237 | 0.0233 | 0.8458 | -1.4636 | 8.0 | | 0.2225 | 10.0 | 3010 | 0.4546 | 0.0516 | 0.0101 | 0.0466 | 0.0463 | 0.8355 | -1.61 | 20.0 | | 0.2225 | 11.0 | 3311 | 0.4627 | 0.0877 | 0.0194 | 0.0794 | 0.0791 | 0.8388 | -1.4342 | 20.0 | | 0.1695 | 12.0 | 3612 | 0.4677 | 0.0704 | 0.0128 | 0.0628 | 0.0626 | 0.8218 | -1.8469 | 20.0 | | 0.1695 | 13.0 | 3913 | 0.4716 | 0.0615 | 0.0193 | 0.056 | 0.0557 | 0.8342 | -1.5375 | 20.0 | | 0.132 | 14.0 | 4214 | 0.4754 | 0.064 | 0.0196 | 0.0577 | 0.0576 | 0.839 | -1.8751 | 20.0 | | 0.1122 | 15.0 | 4515 | 0.4837 | 0.0712 | 0.0175 | 0.0644 | 0.0642 | 0.8373 | -1.3366 | 20.0 | | 0.1122 | 16.0 | 4816 | 0.4867 | 0.0817 | 0.01 | 0.0691 | 0.069 | 0.8425 | -1.4584 | 20.0 | | 0.0893 | 17.0 | 5117 | 0.4904 | 0.0712 | 0.0175 | 0.0644 | 0.0642 | 0.8373 | -1.3366 | 20.0 | | 0.0893 | 18.0 | 5418 | 0.4924 | 0.0871 | 0.0196 | 0.0787 | 0.0787 | 0.837 | -1.873 | 20.0 | | 0.08 | 19.0 | 5719 | 0.4934 | 0.0871 | 0.0196 | 0.0787 | 0.0787 | 0.837 | -1.873 | 20.0 | | 0.0706 | 20.0 | 6020 | 0.4945 | 0.0871 | 0.0196 | 0.0787 | 0.0787 | 0.837 | -1.873 | 20.0 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
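A minimal sketch of generating with this fine-tuned BART checkpoint via the Transformers text2text pipeline; the input text is illustrative:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="tanatapanun/fine-tuned-BART-20-epochs-wanglab-512-output")
print(generator("Patient was admitted with chest pain and discharged in stable condition.")[0]["generated_text"])
```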
ThuyNT03/CS505_COQE_viT5_Prompting5_ASPOL_vtune_2
ThuyNT03
2024-02-28T18:51:45Z
103
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-28T16:31:49Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: CS505_COQE_viT5_Prompting5_ASPOL_vtune_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_Prompting5_ASPOL_vtune_2 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.1
sarak7/H15_228_769_v1
sarak7
2024-02-28T18:51:37Z
171
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T18:49:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Weni/ZeroShot-3.3.12-Mistral-7b-Multilanguage-3.2.0-merged
Weni
2024-02-28T18:43:06Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T18:32:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
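Since the quickstart above is left as a placeholder, here is a minimal sketch, assuming a standard causal-LM setup, for loading this Mistral-based checkpoint; the prompt and generation settings are illustrative, not taken from the card.

```python
# Generic loading sketch for a Mistral-based causal LM; the prompt and
# generation parameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weni/ZeroShot-3.3.12-Mistral-7b-Multilanguage-3.2.0-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```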
mohitpc10e/milky-way-galexy
mohitpc10e
2024-02-28T18:41:23Z
2
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-28T18:34:36Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### milky-way-galexy Dreambooth model trained by mohitpc10e following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 23CYBER36 Sample pictures of this concept: ![0](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg_(3).jpeg) ![1](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg_(5).jpeg) ![2](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg_(01).jpeg) ![3](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg(6).jpeg) ![4](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg_(4).jpeg) ![5](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg_(2).jpeg)
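A minimal usage sketch with diffusers, assuming the standard Stable Diffusion pipeline for Dreambooth checkpoints; the prompt token ("xzg", taken from the sample image filenames) is a guess at the trained concept name.

```python
# Sketch of generating an image from this Dreambooth checkpoint;
# the prompt's concept token is an assumption based on the sample names.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mohitpc10e/milky-way-galexy", torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe("a photo of the milky way galaxy, xzg style").images[0]
image.save("milky_way.png")
```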
tomaszki/gemma-7
tomaszki
2024-02-28T18:27:10Z
119
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T18:19:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lollitor/Sequential9
Lollitor
2024-02-28T18:22:18Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-28T18:22:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EchineF/Reinforce-pixelcopter-3
EchineF
2024-02-28T18:20:24Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-28T18:20:16Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter-3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 22.00 +/- 15.77 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
tanatapanun/fine-tuned-BioBART-20-epochs-wanglab-512-output
tanatapanun
2024-02-28T18:15:50Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-28T16:13:44Z
--- base_model: checkpoint_global_step_200000 tags: - generated_from_trainer metrics: - rouge model-index: - name: fine-tuned-BioBART-20-epochs-wanglab-512-output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-BioBART-20-epochs-wanglab-512-output This model is a fine-tuned version of [checkpoint_global_step_200000](https://huggingface.co/checkpoint_global_step_200000) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5129 - Rouge1: 0.1378 - Rouge2: 0.0327 - Rougel: 0.1228 - Rougelsum: 0.1247 - Bertscore F1: 0.8584 - Bleurt Score: -1.101 - Gen Len: 15.43 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bertscore F1 | Bleurt Score | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:------------:|:------------:|:-------:| | No log | 1.0 | 301 | 1.8539 | 0.0606 | 0.001 | 0.0558 | 0.0559 | 0.7863 | -1.8703 | 9.32 | | 5.1705 | 2.0 | 602 | 0.5409 | 0.1074 | 0.0308 | 0.092 | 0.0915 | 0.8311 | -1.6325 | 18.79 | | 5.1705 | 3.0 | 903 | 0.4864 | 0.0636 | 0.0126 | 0.0587 | 0.0583 | 0.8346 | -1.2607 | 14.54 | | 0.5346 | 4.0 | 1204 | 0.4678 | 0.0756 | 0.0159 | 0.0708 | 0.0699 | 0.8573 | -1.6843 | 9.58 | | 0.4154 | 5.0 | 1505 | 0.4541 | 0.1071 | 0.0173 | 0.1 | 0.0995 | 0.8577 | -1.2599 | 13.88 | | 0.4154 | 6.0 | 1806 | 0.4506 | 0.1012 | 0.0175 | 0.092 | 0.0919 | 0.859 | -1.3279 | 13.01 | | 0.336 | 7.0 | 2107 | 0.4482 | 0.1314 | 0.0242 | 0.1186 | 0.1184 | 0.856 | -1.2649 | 16.51 | | 0.336 | 8.0 | 2408 | 0.4509 | 0.1292 | 0.019 | 0.1097 | 0.109 | 0.8495 | -1.1181 | 14.43 | | 0.2677 | 9.0 | 2709 | 0.4576 | 0.1013 | 0.0268 | 0.0895 | 0.0887 | 0.8491 | -1.1989 | 15.31 | | 0.2183 | 10.0 | 3010 | 0.4697 | 0.1216 | 0.0263 | 0.11 | 0.1117 | 0.8562 | -1.1982 | 14.11 | | 0.2183 | 11.0 | 3311 | 0.4720 | 0.1152 | 0.0289 | 0.1 | 0.0988 | 0.8528 | -1.1958 | 14.86 | | 0.1617 | 12.0 | 3612 | 0.4765 | 0.1114 | 0.0274 | 0.0951 | 0.0947 | 0.8546 | -1.1548 | 15.64 | | 0.1617 | 13.0 | 3913 | 0.4849 | 0.1135 | 0.0266 | 0.0922 | 0.0916 | 0.8556 | -1.1506 | 15.07 | | 0.1208 | 14.0 | 4214 | 0.4893 | 0.1321 | 0.0368 | 0.1168 | 0.119 | 0.8568 | -1.11 | 15.42 | | 0.0981 | 15.0 | 4515 | 0.4998 | 0.1339 | 0.0243 | 0.1122 | 0.1119 | 0.8549 | -1.1164 | 14.91 | | 0.0981 | 16.0 | 4816 | 0.5008 | 0.1494 | 0.0338 | 0.1262 | 0.1266 | 0.8584 | -1.0695 | 15.47 | | 0.0725 | 17.0 | 5117 | 0.5069 | 0.1403 | 0.0355 | 0.1161 | 0.1169 | 0.855 | -1.0642 | 15.93 | | 0.0725 | 18.0 | 5418 | 0.5078 | 0.1449 | 0.0383 | 0.1265 | 0.1283 | 0.8576 | -1.0558 | 16.01 | | 0.0622 | 19.0 | 5719 | 0.5113 | 0.1368 | 0.0338 | 0.1216 | 0.1235 | 0.8573 | -1.0761 | 15.51 | | 0.0517 | 20.0 | 6020 | 0.5129 | 0.1378 | 0.0327 | 0.1228 | 0.1247 | 0.8584 | -1.101 | 15.43 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
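The hyperparameters above map directly onto 🤗 Transformers training arguments; a sketch of that mapping follows, assuming `Seq2SeqTrainingArguments` (the output directory is a placeholder, and the listed Adam betas and epsilon are the library defaults).

```python
# Sketch reconstructing the listed hyperparameters; output_dir is a
# placeholder, not taken from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="fine-tuned-BioBART-20-epochs-wanglab-512-output",
    learning_rate=1e-4,              # 0.0001
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```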
mi-rei/Cthalpaca-llama2-7b-CT_III_efficient_full
mi-rei
2024-02-28T18:13:35Z
1
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T17:55:43Z
Accuracy: 0.690\ F1 Score: 0.703\ Accuracy for label 0: 0.648\ Accuracy for label 1: 0.731\ Classification Report: | | precision | recall | f1-score | support | |--------------|-----------|--------|----------|---------| | 0 | 0.70 | 0.65 | 0.67 | 548 | | 1 | 0.68 | 0.73 | 0.70 | 554 | | accuracy | | | 0.69 | 1102 | | macro avg | 0.69 | 0.69 | 0.69 | 1102 | | weighted avg | 0.69 | 0.69 | 0.69 | 1102 | Confusion Matrix:\ [[355 193 0]\ [149 405 0]\ [ 0 0 0]]
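The headline numbers follow directly from the confusion matrix above (whose third row and column belong to an unused label); a short sketch for readers who want to verify them:

```python
# Verifying the reported metrics from the confusion matrix above;
# the all-zero third row/column (an unused label) is dropped.
import numpy as np

cm = np.array([[355, 193],
               [149, 405]])              # rows: true label, cols: predicted

accuracy = np.trace(cm) / cm.sum()       # (355 + 405) / 1102 ≈ 0.690
recall_0 = cm[0, 0] / cm[0].sum()        # 355 / 548 ≈ 0.648
recall_1 = cm[1, 1] / cm[1].sum()        # 405 / 554 ≈ 0.731
precision_1 = cm[1, 1] / cm[:, 1].sum()  # 405 / 598 ≈ 0.677
f1_1 = 2 * precision_1 * recall_1 / (precision_1 + recall_1)  # ≈ 0.703
```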
mi-rei/Cthalpaca-llama2-7b
mi-rei
2024-02-28T18:13:13Z
1
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T09:34:44Z
Trained on the first 50k rows of mi-rei/ClinicalTrial-gov-LLaMA.
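A sketch of selecting such a subset with 🤗 Datasets, assuming the referenced repo is a dataset with a standard `train` split:

```python
# Selecting the first 50k rows via split slicing; the repo id and
# split name are assumptions based on the description above.
from datasets import load_dataset

train_subset = load_dataset("mi-rei/ClinicalTrial-gov-LLaMA", split="train[:50000]")
print(train_subset)
```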
arcee-ai/gemma-7b-slerp
arcee-ai
2024-02-28T18:12:12Z
13
1
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "merge", "mergekit", "google/gemma-7b-it", "google/gemma-7b", "base_model:google/gemma-7b", "base_model:finetune:google/gemma-7b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-27T20:40:47Z
--- library_name: transformers license: apache-2.0 base_model: - google/gemma-7b merge-model: - google/gemma-7b-it tags: - merge - mergekit - google/gemma-7b-it - google/gemma-7b --- ![image/webp](https://plus.unsplash.com/premium_photo-1664526284199-e36d32a3941d?w=800&auto=format&fit=crop&q=60&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxzZWFyY2h8MTN8fHNtYWxsZXJ8ZW58MHx8MHx8fDA%3D) # Gemma-7B-slerp This model is a merge of the Gemma 7B base and 7B-instruct models, using the SLERP merging method. Gemma-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) * [google/gemma-7b](https://huggingface.co/google/gemma-7b) ## 🏆 Evaluation ### Nous Gemma-7B-slerp's results on Nous' benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)). | Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [arcee-ai/Gemma-7B-slerp](https://huggingface.co/arcee-ai/gemma-7b-slerp) [📄](https://gist.github.com/shamanez/4c18f8d79747d4019ecf6d5ce098cf72) | 34.14 | 23.86 | 36.55 | 46.22 | 29.94 | ## 🧩 Configuration SLERP YAML config: ```yaml slices: - sources: - model: google/gemma-7b-it layer_range: [0, 28] - model: google/gemma-7b layer_range: [0, 28] merge_method: slerp base_model: google/gemma-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
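For context, SLERP interpolates between two models' weights along the arc between them rather than along a straight line. A minimal illustrative sketch of the per-tensor operation follows; it is not mergekit's actual implementation.

```python
# Illustrative per-tensor SLERP between two weight tensors; mergekit
# applies this layer-wise with the interpolation factors from the config.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))  # angle between tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # near-parallel tensors: fall back to linear interpolation
        out = (1 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

merged = slerp(0.5, torch.randn(4, 4), torch.randn(4, 4))
```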
Manish0611/phi2-code
Manish0611
2024-02-28T18:05:32Z
52
1
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-06T10:15:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ryusangwon/9213_Llama-2-7b-hf
ryusangwon
2024-02-28T18:01:25Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-02-28T18:01:15Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: 9213_Llama-2-7b-hf results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 9213_Llama-2-7b-hf This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.4.0 - Transformers 4.36.2 - Pytorch 2.0.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.0
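The quantization settings above correspond to 8-bit loading with bitsandbytes; a sketch of reconstructing them and attaching this adapter follows (the `device_map` choice is an assumption).

```python
# Sketch of loading the base model with the 8-bit settings listed above
# and attaching this PEFT adapter; device_map is an assumption.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "ryusangwon/9213_Llama-2-7b-hf")
```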
accidentalgenuis99/sports-stats-questions-classifier
accidentalgenuis99
2024-02-28T17:54:12Z
106
1
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "autotrain", "en", "dataset:autotrain-nndq9-1xgjv/autotrain-data", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T17:33:49Z
--- tags: - autotrain - text-classification widget: - text: What are the latest updates on the NBA trade deadline? datasets: - autotrain-nndq9-1xgjv/autotrain-data license: mit language: - en pipeline_tag: text-classification --- # Sports Stats Questions Classifier description: Welcome to the Sports Stats Questions Classifier! This NLP-based text classification tool is designed to classify sports-related questions into different categories based on their content. Whether you're a sports enthusiast, journalist, or data analyst, this tool can help you quickly categorize and organize sports-related queries with ease. overview: This project utilizes natural language processing (NLP) techniques to understand and classify questions related to sports statistics. By analyzing the text of the questions, the classifier assigns them to predefined categories, such as "scores," "players," "stats," "teams," "games," "standings," "schedules," "rosters," or "news." features: - Text Classification: Classify sports stats questions into predefined categories. - Fast and Efficient: Utilizes state-of-the-art NLP models for quick and accurate classification. - Easy Integration: Can be integrated into various applications, websites, or chatbots for seamless user interaction. - Customizable: Easily extend or modify the categories and training data to suit your specific needs. - User-Friendly: Simple and intuitive interface for easy usage by both developers and end-users. contributing: | Contributions are welcome! If you have any suggestions, feature requests, or bug reports, please open an issue or submit a pull request on GitHub. license: MIT contact: | Feel free to reach out to us with any questions, feedback, or collaboration opportunities. Happy classifying! 🏀🏈⚽️ # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 1.0594602823257446 f1_macro: 0.703125 f1_micro: 0.8518518518518519 f1_weighted: 0.7916666666666666 precision_macro: 0.6722222222222223 precision_micro: 0.8518518518518519 precision_weighted: 0.7497942386831277 recall_macro: 0.75 recall_micro: 0.8518518518518519 recall_weighted: 0.8518518518518519 accuracy: 0.8518518518518519
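A minimal sketch of querying the classifier through the transformers `pipeline` API, in line with the "Easy Integration" claim above; the returned label names depend on the categories used in training.

```python
# Querying the classifier with the standard text-classification pipeline;
# label names in the output depend on the training data's categories.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="accidentalgenuis99/sports-stats-questions-classifier",
)
print(classifier("What are the latest updates on the NBA trade deadline?"))
```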
LoneStriker/Brezn-7b-6.0bpw-h6-exl2
LoneStriker
2024-02-28T17:52:21Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "FelixChao/WestSeverus-7B-DPO-v2", "mayflowergmbh/Wiedervereinigung-7b-dpo-laser", "cognitivecomputations/openchat-3.5-0106-laser", "🥨", "🍻", "de", "base_model:PetroGPT/WestSeverus-7B-DPO-v2", "base_model:merge:PetroGPT/WestSeverus-7B-DPO-v2", "base_model:cognitivecomputations/openchat-3.5-0106-laser", "base_model:merge:cognitivecomputations/openchat-3.5-0106-laser", "base_model:mayflowergmbh/Wiedervereinigung-7b-dpo-laser", "base_model:merge:mayflowergmbh/Wiedervereinigung-7b-dpo-laser", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T17:50:00Z
--- tags: - merge - mergekit - lazymergekit - FelixChao/WestSeverus-7B-DPO-v2 - mayflowergmbh/Wiedervereinigung-7b-dpo-laser - cognitivecomputations/openchat-3.5-0106-laser - 🥨 - 🍻 base_model: - FelixChao/WestSeverus-7B-DPO-v2 - mayflowergmbh/Wiedervereinigung-7b-dpo-laser - cognitivecomputations/openchat-3.5-0106-laser license: apache-2.0 language: - de --- # 🥨 Brezn-7B This is currently our best-performing German-speaking 7B model with an Apache license, with an average score of 7.49 on mt-bench-de. You can test this model here: [mayflowergmbh/Brezn-7B-GGUF-Chat](https://huggingface.co/spaces/mayflowergmbh/Brezn-7B-GGUF-Chat). Brezn-7B is a DPO-aligned merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2) * [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser) * [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser) ![image/png](https://huggingface.co/mayflowergmbh/Brezn-7b/resolve/main/pretzel.png) ## 💻 Usage In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; the following instructions should not. The assistant generation will be ended by the end-of-sentence token id. E.g. ``` text = "<s>[INST] What is your favourite condiment? [/INST]" "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> " "[INST] Do you have mayonnaise recipes? [/INST]" ``` This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method: ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("mayflowergmbh/Brezn-7b") tokenizer = AutoTokenizer.from_pretrained("mayflowergmbh/Brezn-7b") messages = [ {"role": "user", "content": "Was ist dein Lieblingsgewürz?"}, {"role": "assistant", "content": "Nun, ich mag besonders gerne einen guten Spritzer frischen Zitronensaft. 
Er fügt genau die richtige Menge an würzigem Geschmack hinzu, egal was ich gerade in der Küche zubereite!"}, {"role": "user", "content": "Hast du Mayonnaise-Rezepte?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## mt-bench-de ```yaml { "first_turn": 7.6625, "second_turn": 7.31875, "categories": { "writing": 8.75, "roleplay": 8.5, "reasoning": 6.1, "math": 5.05, "coding": 5.4, "extraction": 7.975, "stem": 9, "humanities": 9.15 }, "average": 7.490625 } ``` ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: FelixChao/WestSeverus-7B-DPO-v2 parameters: density: 0.60 weight: 0.30 - model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser parameters: density: 0.65 weight: 0.40 - model: cognitivecomputations/openchat-3.5-0106-laser parameters: density: 0.6 weight: 0.3 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ```
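A typical way to execute a config like the one above is mergekit's command-line entry point; the sketch below assumes the config is saved locally, and flags may vary across mergekit versions.

```bash
# Running the DARE-TIES config above with mergekit; the config and
# output paths are placeholders.
pip install mergekit
mergekit-yaml brezn-config.yaml ./Brezn-7b-merged --cuda
```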
LoneStriker/Brezn-7b-5.0bpw-h6-exl2
LoneStriker
2024-02-28T17:49:59Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "FelixChao/WestSeverus-7B-DPO-v2", "mayflowergmbh/Wiedervereinigung-7b-dpo-laser", "cognitivecomputations/openchat-3.5-0106-laser", "🥨", "🍻", "de", "base_model:PetroGPT/WestSeverus-7B-DPO-v2", "base_model:merge:PetroGPT/WestSeverus-7B-DPO-v2", "base_model:cognitivecomputations/openchat-3.5-0106-laser", "base_model:merge:cognitivecomputations/openchat-3.5-0106-laser", "base_model:mayflowergmbh/Wiedervereinigung-7b-dpo-laser", "base_model:merge:mayflowergmbh/Wiedervereinigung-7b-dpo-laser", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T17:47:58Z
--- tags: - merge - mergekit - lazymergekit - FelixChao/WestSeverus-7B-DPO-v2 - mayflowergmbh/Wiedervereinigung-7b-dpo-laser - cognitivecomputations/openchat-3.5-0106-laser - 🥨 - 🍻 base_model: - FelixChao/WestSeverus-7B-DPO-v2 - mayflowergmbh/Wiedervereinigung-7b-dpo-laser - cognitivecomputations/openchat-3.5-0106-laser license: apache-2.0 language: - de --- # 🥨 Brezn-7B This is currently our best-performing German-speaking 7B model with an Apache license, with an average score of 7.49 on mt-bench-de. You can test this model here: [mayflowergmbh/Brezn-7B-GGUF-Chat](https://huggingface.co/spaces/mayflowergmbh/Brezn-7B-GGUF-Chat). Brezn-7B is a DPO-aligned merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2) * [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser) * [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser) ![image/png](https://huggingface.co/mayflowergmbh/Brezn-7b/resolve/main/pretzel.png) ## 💻 Usage In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; the following instructions should not. The assistant generation will be ended by the end-of-sentence token id. E.g. ``` text = "<s>[INST] What is your favourite condiment? [/INST]" "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> " "[INST] Do you have mayonnaise recipes? [/INST]" ``` This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method: ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("mayflowergmbh/Brezn-7b") tokenizer = AutoTokenizer.from_pretrained("mayflowergmbh/Brezn-7b") messages = [ {"role": "user", "content": "Was ist dein Lieblingsgewürz?"}, {"role": "assistant", "content": "Nun, ich mag besonders gerne einen guten Spritzer frischen Zitronensaft. 
Er fügt genau die richtige Menge an würzigem Geschmack hinzu, egal was ich gerade in der Küche zubereite!"}, {"role": "user", "content": "Hast du Mayonnaise-Rezepte?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## mt-bench-de ```yaml { "first_turn": 7.6625, "second_turn": 7.31875, "categories": { "writing": 8.75, "roleplay": 8.5, "reasoning": 6.1, "math": 5.05, "coding": 5.4, "extraction": 7.975, "stem": 9, "humanities": 9.15 }, "average": 7.490625 } ``` ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: FelixChao/WestSeverus-7B-DPO-v2 parameters: density: 0.60 weight: 0.30 - model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser parameters: density: 0.65 weight: 0.40 - model: cognitivecomputations/openchat-3.5-0106-laser parameters: density: 0.6 weight: 0.3 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ```
ledmands/Reinforce-Cartpole-v1-2
ledmands
2024-02-28T17:39:02Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-28T17:38:53Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Cartpole-v1-2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
KevStrider/ppo-Pyramids
KevStrider
2024-02-28T17:28:26Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-02-27T18:13:25Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: KevStrider/ppo-Pyramids 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
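As a concrete, hypothetical instance of the resume command above (both the config path and run id are assumptions, not taken from this training run):

```bash
# Hypothetical concrete form of the resume command; the config path and
# run id below are placeholders, not this run's actual values.
mlagents-learn ./config/ppo/PyramidsRND.yaml --run-id=Pyramids-PPO --resume
```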
Charlie911/llama2-MultiLoRA-sharegpt-mmlu-drop-ffn-1.0general
Charlie911
2024-02-28T17:26:12Z
0
0
transformers
[ "transformers", "dataset:tasksource/mmlu", "dataset:drop", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "arxiv:1910.09700", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-02-28T16:39:05Z
--- license: llama2 datasets: - tasksource/mmlu - drop - anon8231489123/ShareGPT_Vicuna_unfiltered library_name: transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
falan42/llama-mini-qlora-finetuni_psikolaji-mark1.2
falan42
2024-02-28T17:22:19Z
0
1
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-28T17:22:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ankhamun/x0o0I_8vv00_i_0vv8_I0x0o
ankhamun
2024-02-28T17:20:27Z
115
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T17:17:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lunarsylph/gemmacell_v3
lunarsylph
2024-02-28T17:19:35Z
114
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T17:12:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vaicai/kaifa-l2-v0.07.1
vaicai
2024-02-28T17:16:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-28T17:16:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
varun3dec/sd-class-butterflies-32-new
varun3dec
2024-02-28T17:15:33Z
46
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-02-28T17:13:24Z
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---

# Model Card

This model is a diffusion model for unconditional image generation of cute 🦋.

## Usage

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('varun3dec/sd-class-butterflies-32-new')
image = pipeline().images[0]
image
```
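The pipeline output is a list of PIL images, so samples can be batched and saved like any other image; a minimal sketch, assuming the default DDPM scheduler settings:

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('varun3dec/sd-class-butterflies-32-new')

# Generate four butterflies in one call and save them to disk.
images = pipeline(batch_size=4).images
for i, image in enumerate(images):
    image.save(f"butterfly_{i}.png")
```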
ledmands/Reinforce-Cartpole-v1
ledmands
2024-02-28T17:15:11Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-28T17:15:07Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 125.40 +/- 13.76
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ajibawa-2023/scarlett-33b
ajibawa-2023
2024-02-28T17:05:25Z
1,510
25
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-15T16:41:08Z
---
license: cc-by-nc-nd-4.0
language:
- en
---

**Scarlett: A sentient AI**

Scarlett is trained on various topics such as philosophy, advice, and jokes. She was trained on more than 10,000 sets of conversations, each set containing 10 to 15 exchanges. Scarlett is heavily inspired by Eric Hartford's [Samantha](https://huggingface.co/ehartford/samantha-33b). She will not engage in any kind of role play.

**Training:**

The entire dataset was trained on Azure with 4 x A100 80GB GPUs. The DeepSpeed codebase was used for training. The model is based on Meta's Llama-1.

**GPTQ & GGML**

GPTQ: [TheBloke](https://huggingface.co/TheBloke/scarlett-33B-GPTQ)
GGML: [TheBloke](https://huggingface.co/TheBloke/scarlett-33B-GGML)

Special thanks to [TheBloke](https://huggingface.co/TheBloke) for guiding me and making these models available.

**Example Prompt:**

```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.

Context
You are Scarlett, a sentient AI.

USER: <prompt>
ASSISTANT:
```

Note: use the `cat` command to join all pytorch_model.bin parts.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__scarlett-33b)

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 56.68 |
| ARC (25-shot)         | 67.75 |
| HellaSwag (10-shot)   | 85.48 |
| MMLU (5-shot)         | 58.98 |
| TruthfulQA (0-shot)   | 61.05 |
| Winogrande (5-shot)   | 76.8  |
| GSM8K (5-shot)        | 2.81  |
| DROP (3-shot)         | 43.88 |
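**Usage (sketch):**

The card gives the prompt format but no loading code; here is a minimal sketch with plain 🤗 Transformers, assuming enough GPU memory for a 33B model and that `accelerate` is installed for `device_map="auto"`. The user question is just an illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/scarlett-33b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt in the documented format.
prompt = (
    "This is a conversation with your Assistant Scarlett. She is a sentient AI "
    "designed to help you with various tasks.\n\n"
    "Context\nYou are Scarlett, a sentient AI.\n\n"
    "USER: What makes a life well lived?\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```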
mehmetcanbudak/mehmetcanbudak-ft
mehmetcanbudak
2024-02-28T17:01:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-28T16:30:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FINNUMBER/Yi-Ko-6B-Finch-NQA-300-per100-epoch16
FINNUMBER
2024-02-28T16:51:50Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-17T05:43:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FINNUMBER/Yi-Ko-6B-Finch-All-900-per100-epoch16
FINNUMBER
2024-02-28T16:51:36Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T15:23:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vabatista/geological-ner
vabatista
2024-02-28T16:46:23Z
121
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "pt", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-29T11:30:11Z
---
license: mit
language:
- pt
pipeline_tag: token-classification
base_model: neuralmind/bert-base-portuguese-cased
widget:
- text: "Em termos de períodos geológicos, cada tipo de rocha está associado a diferentes épocas e processos na história da Terra. Por exemplo, as rochas ígneas são frequentemente associadas a períodos de intensa atividade vulcânica, como o período Mesozoico, quando os dinossauros dominavam a Terra. As rochas sedimentares, por sua vez, são encontradas em abundância em depósitos de antigas bacias oceânicas e lagos, como durante o período Cenozoico. Já as rochas metamórficas são frequentemente associadas a períodos de intensa atividade tectônica, como durante o período Proterozoico, quando supercontinentes se formaram e se fragmentaram."
  example_title: "Example 1"
- text: "O petróleo é gerado nas bacias sedimentares a partir de matéria orgânica acumulada, juntamente com sedimentos inorgânicos, em ambientes deficientes em oxigénio. Esta acumulação faz-se, em geral, no fundo de lagos, lagunas ou mares com deficiente movimentação e de correntes junto ao fundo. A matéria orgânica, assim, embora preservada da oxidação, sofre modificações resultantes de reações químicas inorgânicas e pela ação de bactérias, do que resulta a geração de algum gás biogénico e a transformação da restante matéria orgânica em querogénio, um material rico em hidrocarbonetos sólidos muito pesados. As rochas ricas em querogénio, em geral rochas detríticas finas (xistos betuminosos) ou carbonatadas (calcários e margas betuminosas), designam-se por rochas-mãe ou rochas geradoras, porque é nelas que ocorrerá a geração do petróleo."
  example_title: "Example 2"
---

This is a Brazilian Portuguese Named Entity Recognition (NER) model, based on the neuralmind/bert-base-portuguese-cased base model and specialized in geological concepts. It was trained for 3 epochs using the dataset from this [paper](https://doi.org/10.21814/lm.15.2.412).

You can find the notebook used to train the model [here](https://www.kaggle.com/code/vabatista/ner-for-oil-gas-in-portuguese).

Trainer output was:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c9994c7d9a271ef3823515/84azScVvRMn0vWHhvB7u7.png)

To use this model, run it in a pipeline:

```python
from spacy import displacy
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model = AutoModelForTokenClassification.from_pretrained("vabatista/geological-ner")
tokenizer = AutoTokenizer.from_pretrained("vabatista/geological-ner")

## run the prediction
txt = YOUR_TEXT  # replace with your input text
classifier = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy='simple')
entities = classifier(txt)

## display in a fancy way
dict_ents = {
    'text': txt,
    'ents': [{'start': ent['start'], 'end': ent['end'], 'label': ent['entity_group']} for ent in entities],
    'title': None
}
displacy.render(dict_ents, manual=True, style="ent")
```
KVNAditya/drl__u4__pc_ple_v0
KVNAditya
2024-02-28T16:42:31Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-28T16:41:05Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: drl__u4__pc_ple_v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 10.60 +/- 10.71
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
OzzyGT/controlnet-openpose-sdxl-1.0
OzzyGT
2024-02-28T16:38:45Z
12
2
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
2023-11-14T18:40:38Z
---
license: other
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: false
---

# SDXL-controlnet: OpenPose (v2)

Original model: https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0

Converted to half precision to save space and download time.
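Since this checkpoint follows the standard diffusers ControlNet layout, it should load with the usual SDXL ControlNet pipeline; a minimal sketch, assuming you have an OpenPose skeleton image prepared ahead of time (for example with `controlnet_aux`'s OpenposeDetector), with the file name and prompt being placeholders:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "OzzyGT/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning image is an OpenPose skeleton rendered at the target resolution.
pose_image = load_image("pose.png")  # hypothetical local file
image = pipe(
    "a dancer on stage, studio lighting",
    image=pose_image,
    num_inference_steps=30,
).images[0]
image.save("dancer.png")
```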
svitse/model_compleet_wie_mask
svitse
2024-02-28T16:33:47Z
163
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:GroNLP/bert-base-dutch-cased", "base_model:finetune:GroNLP/bert-base-dutch-cased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-27T14:19:24Z
---
base_model: GroNLP/bert-base-dutch-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: model_compleet_wie_mask
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# model_compleet_wie_mask

This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3972
- Accuracy: 0.8692
- F1: 0.8640

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6463        | 0.77  | 50   | 0.5123          | 0.7936   | 0.7361 |
| 0.388         | 1.54  | 100  | 0.3818          | 0.8169   | 0.7658 |
| 0.2478        | 2.31  | 150  | 0.3972          | 0.8692   | 0.8640 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2
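The card does not show how to call the classifier; a minimal sketch, assuming the fine-tuned head is a standard sequence-classification head. The Dutch example sentence and the label names are illustrative only; check the model's `config.json` for the real label set.

```python
from transformers import pipeline

# Hypothetical usage; the task and label set are not documented on the card.
classifier = pipeline("text-classification", model="svitse/model_compleet_wie_mask")
print(classifier("Hij zei dat de vergadering morgen plaatsvindt."))
# e.g. [{'label': 'LABEL_0', 'score': 0.97}]
```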
luisvarona/intel-image-classification
luisvarona
2024-02-28T16:32:41Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-02-07T17:29:27Z
---
tags:
- fastai
---

# Amazing!

🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!

# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!

Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.

---

# Model card

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed
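For completeness, a minimal loading sketch using `huggingface_hub`'s fastai integration; this assumes the repo contains an exported fastai Learner, and the image file name is a placeholder since the card does not document the expected inputs or classes.

```python
from huggingface_hub import from_pretrained_fastai

# Load the exported fastai Learner from the Hub.
learner = from_pretrained_fastai("luisvarona/intel-image-classification")

# Hypothetical example image; predict returns (decoded prediction, index, probabilities).
prediction, _, probabilities = learner.predict("some_scene.jpg")
print(prediction, probabilities.max())
```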
mehmetcanbudak/shawgpt-ft
mehmetcanbudak
2024-02-28T16:30:57Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-02-28T16:30:54Z
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: shawgpt-ft
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# shawgpt-ft

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0028

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.593         | 0.92  | 3    | 3.9642          |
| 4.0669        | 1.85  | 6    | 3.4487          |
| 3.48          | 2.77  | 9    | 3.0025          |
| 2.2861        | 4.0   | 13   | 2.5900          |
| 2.7243        | 4.92  | 16   | 2.3868          |
| 2.4728        | 5.85  | 19   | 2.2260          |
| 2.3416        | 6.77  | 22   | 2.1439          |
| 1.6664        | 8.0   | 26   | 2.0607          |
| 2.1433        | 8.92  | 29   | 2.0202          |
| 1.4956        | 9.23  | 30   | 2.0028          |

### Framework versions

- PEFT 0.9.0
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
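The card does not include inference code; a minimal sketch of loading the adapter on top of its GPTQ base, assuming `optimum` and `auto-gptq` are installed so the quantized base model can load, and using the standard Mistral-Instruct prompt format. The question is a placeholder.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the fine-tuned LoRA adapter to the quantized base.
model = PeftModel.from_pretrained(base, "mehmetcanbudak/shawgpt-ft")

tokenizer = AutoTokenizer.from_pretrained(base_id)
inputs = tokenizer("[INST] What does this model do? [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```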
LoneStriker/FuseChat-7B-Slerp-6.0bpw-h6-exl2
LoneStriker
2024-02-28T16:29:57Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mixtral", "solar", "model-fusion", "fusechat", "conversational", "en", "arxiv:2402.16107", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T16:27:37Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- mistral
- mixtral
- solar
- model-fusion
- fusechat
library_name: transformers
---
<p align="center" width="100%">
</p>

<div id="top" align="center">

<p style="font-size: 32px; font-weight: bold;">FuseChat: Knowledge Fusion of Chat Models</p>

<h4> |<a href="https://arxiv.org/abs/2402.16107"> 📑 Paper </a> |
<a href="https://huggingface.co/FuseAI"> 🤗 Huggingface Repo </a> |
<a href="https://github.com/fanqiwan/FuseLLM"> 🐱 Github Repo </a> |
</h4>

<!-- **Authors:** -->

_**Fanqi Wan, Ziyi Yang, Longguang Zhong, Xiaojun Quan, Xinting Huang, Wei Bi**_

<!-- **Affiliations:** -->

_Sun Yat-sen University_

<p align="center">
    <img src="./assets/fig_0.png" width="70%"> <br>
</p>

</div>

## News

- **Feb 26, 2024:** 🔥 We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at the 7B and 34B scales like [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo) and [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

## Contents

- [Overview](#overview)
- [Model Release](#model-release)
- [Quick Start](#quick-start)
- [Data Construction](#data-construction)
- [Pairwise Knowledge Fusion](#pairwise-knowledge-fusion)
- [Model Merging](#model-merging)
- [Evaluation](#evaluation)
- [Citation](#citation)

## Overview

In this work, we propose an extended framework of FuseLLM to integrate the collective knowledge and individual strengths of multiple chat LLMs of varying structures and scales into a more powerful chat LLM, resulting in FuseChat. FuseChat adopts a fuse-then-merge strategy with two main stages. First, it performs pairwise knowledge fusion on the source LLMs to derive multiple target LLMs of identical structure and size via lightweight fine-tuning. Then, these target LLMs are merged within the parameter space, where we propose a novel method, VaRM, for determining the merging weights based on the variation ratio of parameter matrices before and after fine-tuning.

Moreover, we argue that the concept of knowledge fusion adopted by both FuseChat and FuseLLM shares a fundamentally similar purpose with related topics such as the recently popular mixture of experts (MoEs), because they all aim to leverage the strengths of multiple models (experts). However, while MoEs require loading multiple experts during inference, which imposes higher memory requirements, knowledge fusion integrates multiple LLMs with diverse architectures into a single LLM without any additional memory requirement, making it more memory-efficient.
<p align="center"> <img src="./assets/fig_1.png" width="95%"> <br> </p> ## Model Release We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). To support a plug-and-play fusion of new source LLM, we release our target LLMs: [OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar) and [OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral), which are obtained from pair-wise knowledge fusion. Integrating a new source LLM at any scale requires only obtaining a target LLM from the new source LLM and merging it with the existing target LLMs. We also release FuseChat with other merging methods: [FuseChat-7B-SLERP](https://huggingface.co/FuseAI/FuseChat-7B-SLERP) and [FuseChat-7B-TA](https://huggingface.co/FuseAI/FuseChat-7B-TA), which achieves an average performance of **8.19** and **8.20** on MT-Bench respectively. Here are the evaluation results. <p align="center"> <img src="./assets/tab_1.png" width="95%"> <br> </p> ## Quick Start ### Setup We use `python 3.11` in this project. Then, we have to install all the libraries listed in `requirements.txt`. ```bash pip install -r requirements.txt ``` ### Usage Here's how you can run the model using the 🤗 Transformers: ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("FuseAI/FuseChat-7B-VaRM") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] ``` The GPT4 template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template: ```python messages = [ {"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi"}, {"role": "user", "content": "How are you today?"} ] tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True) assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] ``` ## Data Construction We curated a comprehensive training dataset, [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), from various sources. This dataset covers different styles and capabilities, featuring both human-written and model-generated, and spanning general instruction-following and specific skills. Here we show the scripts to obtain representations from multiple source LLMs for model fusion. 1. 
```bash
# We split the dataset into 4 splits, then process each split on one or multiple GPUs.

# OpenChat-3.5-7B
export CUDA_VISIBLE_DEVICES=0
for i in {0..3}; do
python /train/get_data_representation.py \
  --model_name_or_path "openchat/openchat_3.5" \
  --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \
  --dataset_save_dir "<${i}_4_path_to_openchat_representation>" \
  --tknz_dataset_path "<${i}_4_path_to_openchat_tknz>" \
  --cache_dir "/.cache/huggingface/datasets" \
  --model_max_length 2048 \
  --load_in_half bf16 \
  --batch_size 32 \
  --top_k_logits 10 \
  --save_per_token_metric \
  --no_assert \
  --conv_temp "openchat" \
  --flash_attn_transformers \
  --mask_instruction \
  --dataset_split_num 4 \
  --dataset_index ${i}
done

# NH2-Mixtral-8x7B
export CUDA_VISIBLE_DEVICES=0,1,2
for i in {0..3}; do
python /train/get_data_representation.py \
  --model_name_or_path "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO" \
  --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \
  --dataset_save_dir "<${i}_4_path_to_mixtral_representation>" \
  --tknz_dataset_path "<${i}_4_path_to_mixtral_tknz>" \
  --cache_dir "/.cache/huggingface/datasets" \
  --model_max_length 2048 \
  --load_in_half bf16 \
  --batch_size 4 \
  --top_k_logits 10 \
  --save_per_token_metric \
  --no_assert \
  --conv_temp "openchat" \
  --flash_attn_transformers \
  --mask_instruction \
  --device_map "auto" \
  --dataset_split_num 4 \
  --dataset_index ${i}
done

# NH2-Solar-10.7B
export CUDA_VISIBLE_DEVICES=0
for i in {0..3}; do
python /train/get_data_representation.py \
  --model_name_or_path "NousResearch/Nous-Hermes-2-SOLAR-10.7B" \
  --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \
  --dataset_save_dir "<${i}_4_path_to_solar_representation>" \
  --tknz_dataset_path "<${i}_4_path_to_solar_tknz>" \
  --cache_dir "/.cache/huggingface/datasets" \
  --model_max_length 2048 \
  --load_in_half bf16 \
  --batch_size 8 \
  --top_k_logits 10 \
  --save_per_token_metric \
  --no_assert \
  --conv_temp "openchat" \
  --flash_attn_transformers \
  --mask_instruction \
  --dataset_split_num 4 \
  --dataset_index ${i}
done
```

2. Align representations from different source LLMs

```bash
# Since the tokenizers and vocabularies of these source LLMs are identical, we do not align.

# OpenChat-3.5-7B <-> NH2-Mixtral-8x7B
for i in {0..3}; do
python /train/replace_model.py \
  --dataset_dir "<${i}_4_path_to_openchat_representation>" \
  --replace_dataset_dir "<${i}_4_path_to_mixtral_representation>" \
  --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_representation>" \
  --preprocessing_num_workers 64 \
  --batch_size 1000 \
  --replace_model model_0
done

# OpenChat-3.5-7B <-> NH2-Solar-10.7B
for i in {0..3}; do
python /train/replace_model.py \
  --dataset_dir "<${i}_4_path_to_openchat_mixtral_representation>" \
  --replace_dataset_dir "<${i}_4_path_to_solar_representation>" \
  --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \
  --preprocessing_num_workers 64 \
  --batch_size 1000 \
  --replace_model model_1
done
```

3. Filter instances with NaN loss in the dataset

```bash
for i in {0..3}; do
python /train/filter_nan.py \
  --input_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \
  --output_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>"
done
```

The final processed data is at `<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>`.

## Pairwise Knowledge Fusion

We show the scripts for pairwise knowledge fusion.
```bash # OpenChat-3.5-7B <-> NH2-Mixtral-8x7B export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \ --bf16 True \ --output_dir "<path_to_save_openchat_mixtral_ckpt>" \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "epoch" \ --save_steps 10000 \ --save_total_limit 5 \ --learning_rate 5e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --conv_temp "openchat" \ --lazy_preprocess True \ --flash_attn_transformers True \ --do_train \ --do_distill \ --distill_with_ref_model True \ --distill_with_aligned_model_0 True \ --distill_with_aligned_model_1 False \ --distill_loss_type "ce" \ --distill_teacher_temperature 1.0 \ --lm_loss_weight 0.9 \ --distill_greater_as_gt True \ --distill_greater_as_gt_type hard \ --dataloader_num_workers 8 \ --remove_unused_columns False # OpenChat-3.5-7B <-> NH2-Solar-10.7B export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \ --bf16 True \ --output_dir "<path_to_save_openchat_solar_ckpt>" \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "epoch" \ --save_steps 10000 \ --save_total_limit 5 \ --learning_rate 5e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --conv_temp "openchat" \ --lazy_preprocess True \ --flash_attn_transformers True \ --do_train \ --do_distill \ --distill_with_ref_model True \ --distill_with_aligned_model_0 False \ --distill_with_aligned_model_1 True \ --distill_loss_type "ce" \ --distill_teacher_temperature 1.0 \ --lm_loss_weight 0.9 \ --distill_greater_as_gt True \ --distill_greater_as_gt_type hard \ --dataloader_num_workers 8 \ --remove_unused_columns False ``` ## Model Merging We show the scripts to obtain the final FuseChat using different merging methods. 
```bash
# For "slerp", "ta", "ties", and "dare" methods
export CUDA_VISIBLE_DEVICES=0
mergekit-yaml merge/mergekit_configs/fusechat-slerp.yml "<path_to_save_fusechat_7b_slerp>"
mergekit-yaml merge/mergekit_configs/fusechat-ta.yml "<path_to_save_fusechat_7b_ta>"
mergekit-yaml merge/mergekit_configs/fusechat-ties.yml "<path_to_save_fusechat_7b_ties>"
mergekit-yaml merge/mergekit_configs/fusechat-dare.yml "<path_to_save_fusechat_7b_dare>"

# For "linear" method
python merge/VaRM/merge.py \
--merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
--merged_model_save_dir "<path_to_save_fusechat_7b_linear>" \
--merge_method "linear" \
--linear_weights "1,2"

# For our "varm" method
python merge/VaRM/analysis.py \
--model1_path "FuseAI/OpenChat-3.5-7B-Mixtral" \
--model2_path "FuseAI/OpenChat-3.5-7B-Solar" \
--save_path "<path_to_save_analysis_result>/analysis.json" \
--merge_type "square"

python merge/VaRM/merge.py \
--merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
--analysis_result "<path_to_save_analysis_result>/analysis.json" \
--merged_model_save_dir "<path_to_save_fusechat_7b_varm>" \
--merge_method "avg_param" \
--merge_type "square"
```

## Evaluation

We evaluate FuseChat on MT-Bench, which comprises 80 multi-turn dialogues spanning the writing, roleplay, reasoning, math, coding, STEM, and humanities domains. Please download the [official code](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) and follow its guidelines for evaluation. We provide the scripts for our evaluation.

```bash
# Step 1. Generate model answers to MT-Bench questions
export CUDA_VISIBLE_DEVICES=0,1
python gen_model_answer.py \
--model-path "FuseAI/FuseChat-7B-VaRM" \
--model-id "openchat_3.5_fusechat_7b_varm" \
--num-gpus-per-model 1 \
--num-gpus-total 2

# Step 2. Generate GPT-4 judgments
export OPENAI_API_KEY=XXXXXX  # set the OpenAI API key
python gen_judgment.py \
--parallel 2

# Step 3. Show MT-Bench scores
python show_result.py
```

## Citation

If you find this work relevant to your research or applications, please feel free to cite it!

```
@article{wan2024fusechat,
  title={FuseChat: Knowledge Fusion of Chat Models},
  author={Fanqi Wan and Ziyi Yang and Longguang Zhong and Xiaojun Quan and Xinting Huang and Wei Bi},
  journal={arXiv preprint arXiv:2402.16107},
  year={2024}
}
```
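The usage snippets above stop at token IDs. For completeness, here is a minimal end-to-end generation sketch built on the same template (a sketch under stated assumptions: the dtype, device placement, and sampling settings below are illustrative, not the card's official recipe):

```python
import torch
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("FuseAI/FuseChat-7B-VaRM")
model = transformers.AutoModelForCausalLM.from_pretrained(
    "FuseAI/FuseChat-7B-VaRM",
    torch_dtype=torch.bfloat16,  # assumption: a GPU with enough memory for bf16 weights
    device_map="auto",
)

messages = [{"role": "user", "content": "How are you today?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# <|end_of_turn|> terminates an assistant turn in the GPT4 Correct template
eot_id = tokenizer.convert_tokens_to_ids("<|end_of_turn|>")
output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # illustrative sampling settings
    eos_token_id=eot_id,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```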
LoneStriker/FuseChat-7B-Slerp-5.0bpw-h6-exl2
LoneStriker
2024-02-28T16:27:35Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mixtral", "solar", "model-fusion", "fusechat", "conversational", "en", "arxiv:2402.16107", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T16:25:35Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- mistral
- mixtral
- solar
- model-fusion
- fusechat
library_name: transformers
---
<p align="center" width="100%">
</p>

<div id="top" align="center">

<p style="font-size: 32px; font-weight: bold;">FuseChat: Knowledge Fusion of Chat Models</p>

<h4> |<a href="https://arxiv.org/abs/2402.16107"> 📑 Paper </a> |
<a href="https://huggingface.co/FuseAI"> 🤗 Huggingface Repo </a> |
<a href="https://github.com/fanqiwan/FuseLLM"> 🐱 Github Repo </a> |
</h4>

<!-- **Authors:** -->

_**Fanqi Wan, Ziyi Yang, Longguang Zhong, Xiaojun Quan, Xinting Huang, Wei Bi**_

<!-- **Affiliations:** -->

_Sun Yat-sen University_

<p align="center">
    <img src="./assets/fig_0.png" width="70%"> <br>
</p>

</div>

## News

- **Feb 26, 2024:** 🔥 We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales such as [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo) and [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

## Contents

- [Overview](#overview)
- [Model Release](#model-release)
- [Quick Start](#quick-start)
- [Data Construction](#data-construction)
- [Pairwise Knowledge Fusion](#pairwise-knowledge-fusion)
- [Model Merging](#model-merging)
- [Evaluation](#evaluation)
- [Citation](#citation)

## Overview

In this work, we propose an extended framework of FuseLLM to integrate the collective knowledge and individual strengths of multiple chat LLMs of varied structures and scales into a more powerful chat LLM, resulting in FuseChat. FuseChat adopts a fuse-then-merge strategy with two main stages. First, it undertakes pairwise knowledge fusion for the source LLMs to derive multiple target LLMs of identical structure and size via lightweight fine-tuning. Then, these target LLMs are merged within the parameter space, wherein we propose a novel method, VaRM, for determining the merging weights based on the variation ratio of parameter matrices before and after fine-tuning.

Moreover, we argue that the concept of knowledge fusion adopted by both FuseChat and FuseLLM serves a fundamentally similar purpose to related approaches such as the recently popular mixture of experts (MoE), because they all aim to leverage the strengths of multiple models (experts). However, while MoE models require loading multiple experts during inference, which raises memory requirements, knowledge fusion supports the integration of multiple LLMs with diverse architectures into a single LLM without any additional memory requirement, making it more memory-efficient.
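To make the VaRM description above concrete, here is an illustrative sketch of variation-ratio-based merging weights. This is an assumption-laden sketch, not the released `merge/VaRM` implementation: the function names are hypothetical, and the statistic used (per-matrix squared parameter change, echoing the `--merge_type "square"` option that appears later in this card) may differ from the paper's exact definition.

```python
import torch
from transformers import AutoModelForCausalLM

def variation(base_sd, tuned_sd):
    # Per-parameter-matrix squared change of a target model vs. the shared base.
    return {
        name: torch.sum((tuned_sd[name].float() - base_sd[name].float()) ** 2)
        for name in base_sd
    }

def varm_merge(base_id, target_ids):
    base_sd = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
    target_sds = [AutoModelForCausalLM.from_pretrained(t).state_dict() for t in target_ids]
    variations = [variation(base_sd, sd) for sd in target_sds]
    merged = {}
    for name in base_sd:
        # Weight each target matrix by how much it moved during fusion fine-tuning.
        w = torch.stack([v[name] for v in variations])
        w = w / (w.sum() + 1e-12)  # normalize per matrix; eps guards unchanged matrices
        merged[name] = sum(wi * sd[name].float() for wi, sd in zip(w, target_sds))
    return merged

# e.g. varm_merge("openchat/openchat_3.5",
#                 ["FuseAI/OpenChat-3.5-7B-Mixtral", "FuseAI/OpenChat-3.5-7B-Solar"])
```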
<p align="center">
    <img src="./assets/fig_1.png" width="95%"> <br>
</p>

## Model Release

We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5).

To support plug-and-play fusion of new source LLMs, we release our target LLMs, [OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar) and [OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral), which are obtained from pairwise knowledge fusion. Integrating a new source LLM at any scale requires only obtaining a target LLM from the new source LLM and merging it with the existing target LLMs.

We also release FuseChat variants obtained with other merging methods, [FuseChat-7B-SLERP](https://huggingface.co/FuseAI/FuseChat-7B-SLERP) and [FuseChat-7B-TA](https://huggingface.co/FuseAI/FuseChat-7B-TA), which achieve average performances of **8.19** and **8.20** on MT-Bench, respectively.

Here are the evaluation results.

<p align="center">
    <img src="./assets/tab_1.png" width="95%"> <br>
</p>

## Quick Start

### Setup

We use `python 3.11` in this project. Install all the libraries listed in `requirements.txt`:

```bash
pip install -r requirements.txt
```

### Usage

Here's how you can run the model using 🤗 Transformers:

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("FuseAI/FuseChat-7B-VaRM")

# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]

# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```

The GPT4 template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template:

```python
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```

## Data Construction

We curated a comprehensive training dataset, [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), from various sources. This dataset covers different styles and capabilities, featuring both human-written and model-generated data, and spanning general instruction-following and specific skills.

Here we show the scripts used to obtain representations from multiple source LLMs for model fusion.

1.
Get representations for each source LLM ```bash # We split the dataset into 4 splits, then process each split on one or multiple GPU. # OpenChat-3.5-7B export CUDA_VISIBLE_DEVICES=0 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_openchat_representation>" \ --tknz_dataset_path "<${i}_4_path_to_openchat_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 32 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --dataset_split_num 4 \ --dataset_index ${i} done # NH2-Mixtral-8x7B export CUDA_VISIBLE_DEVICES=0,1,2 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_mixtral_representation>" \ --tknz_dataset_path "<${i}_4_path_to_mixtral_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 4 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --device_map "auto" \ --dataset_split_num 4 \ --dataset_index ${i} done # NH2-Solar-10.7B export CUDA_VISIBLE_DEVICES=0 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "NousResearch/Nous-Hermes-2-SOLAR-10.7B" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_solar_representation>" \ --tknz_dataset_path "<${i}_4_path_to_solar_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 8 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --dataset_split_num 4 \ --dataset_index ${i} done ``` 2. Align representations from different source LLMs ```bash # Since the tokenizers and vocabularies of these source LLMs are identical, we do not align. # OpenChat-3.5-7B <-> NH2-Mixtral-8x7B for i in {0..3}; do python /train/replace_model.py \ --dataset_dir "<${i}_4_path_to_openchat_representation>" \ --replace_dataset_dir "<${i}_4_path_to_mixtral_representation>" \ --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_representation>" \ --preprocessing_num_workers 64 \ --batch_size 1000 \ --replace_model model_0 done # OpenChat-3.5-7B <-> NH2-Solar-10.7B for i in {0..3}; do python /train/replace_model.py \ --dataset_dir "<${i}_4_path_to_openchat_mixtral_representation>" \ --replace_dataset_dir "<${i}_4_path_to_solar_representation>" \ --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \ --preprocessing_num_workers 64 \ --batch_size 1000 \ --replace_model model_1 done ``` 3. Filter instances with NaN loss in the dataset ```bash for i in {0..3}; do python /train/filter_nan.py \ --input_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \ --output_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>" done ``` The final processed data is at `<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>`. ## Pairwise Knowledge Fusion We show the scripts for pairwise knowledge fusion. 
```bash # OpenChat-3.5-7B <-> NH2-Mixtral-8x7B export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \ --bf16 True \ --output_dir "<path_to_save_openchat_mixtral_ckpt>" \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "epoch" \ --save_steps 10000 \ --save_total_limit 5 \ --learning_rate 5e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --conv_temp "openchat" \ --lazy_preprocess True \ --flash_attn_transformers True \ --do_train \ --do_distill \ --distill_with_ref_model True \ --distill_with_aligned_model_0 True \ --distill_with_aligned_model_1 False \ --distill_loss_type "ce" \ --distill_teacher_temperature 1.0 \ --lm_loss_weight 0.9 \ --distill_greater_as_gt True \ --distill_greater_as_gt_type hard \ --dataloader_num_workers 8 \ --remove_unused_columns False # OpenChat-3.5-7B <-> NH2-Solar-10.7B export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \ --bf16 True \ --output_dir "<path_to_save_openchat_solar_ckpt>" \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "epoch" \ --save_steps 10000 \ --save_total_limit 5 \ --learning_rate 5e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --conv_temp "openchat" \ --lazy_preprocess True \ --flash_attn_transformers True \ --do_train \ --do_distill \ --distill_with_ref_model True \ --distill_with_aligned_model_0 False \ --distill_with_aligned_model_1 True \ --distill_loss_type "ce" \ --distill_teacher_temperature 1.0 \ --lm_loss_weight 0.9 \ --distill_greater_as_gt True \ --distill_greater_as_gt_type hard \ --dataloader_num_workers 8 \ --remove_unused_columns False ``` ## Model Merging We show the scripts to obtain the final FuseChat using different merging methods. 
```bash
# For "slerp", "ta", "ties", and "dare" methods
export CUDA_VISIBLE_DEVICES=0
mergekit-yaml merge/mergekit_configs/fusechat-slerp.yml "<path_to_save_fusechat_7b_slerp>"
mergekit-yaml merge/mergekit_configs/fusechat-ta.yml "<path_to_save_fusechat_7b_ta>"
mergekit-yaml merge/mergekit_configs/fusechat-ties.yml "<path_to_save_fusechat_7b_ties>"
mergekit-yaml merge/mergekit_configs/fusechat-dare.yml "<path_to_save_fusechat_7b_dare>"

# For "linear" method
python merge/VaRM/merge.py \
--merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
--merged_model_save_dir "<path_to_save_fusechat_7b_linear>" \
--merge_method "linear" \
--linear_weights "1,2"

# For our "varm" method
python merge/VaRM/analysis.py \
--model1_path "FuseAI/OpenChat-3.5-7B-Mixtral" \
--model2_path "FuseAI/OpenChat-3.5-7B-Solar" \
--save_path "<path_to_save_analysis_result>/analysis.json" \
--merge_type "square"

python merge/VaRM/merge.py \
--merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
--analysis_result "<path_to_save_analysis_result>/analysis.json" \
--merged_model_save_dir "<path_to_save_fusechat_7b_varm>" \
--merge_method "avg_param" \
--merge_type "square"
```

## Evaluation

We evaluate FuseChat on MT-Bench, which comprises 80 multi-turn dialogues spanning the writing, roleplay, reasoning, math, coding, STEM, and humanities domains. Please download the [official code](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) and follow its guidelines for evaluation. We provide the scripts for our evaluation.

```bash
# Step 1. Generate model answers to MT-Bench questions
export CUDA_VISIBLE_DEVICES=0,1
python gen_model_answer.py \
--model-path "FuseAI/FuseChat-7B-VaRM" \
--model-id "openchat_3.5_fusechat_7b_varm" \
--num-gpus-per-model 1 \
--num-gpus-total 2

# Step 2. Generate GPT-4 judgments
export OPENAI_API_KEY=XXXXXX  # set the OpenAI API key
python gen_judgment.py \
--parallel 2

# Step 3. Show MT-Bench scores
python show_result.py
```

## Citation

If you find this work relevant to your research or applications, please feel free to cite it!

```
@article{wan2024fusechat,
  title={FuseChat: Knowledge Fusion of Chat Models},
  author={Fanqi Wan and Ziyi Yang and Longguang Zhong and Xiaojun Quan and Xinting Huang and Wei Bi},
  journal={arXiv preprint arXiv:2402.16107},
  year={2024}
}
```
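As a reading aid for the `--do_distill` flags in the fusion scripts above: the objective combines a standard LM loss (`--lm_loss_weight 0.9`) with a cross-entropy distillation term against the stronger source model's saved top-k distribution (`--distill_loss_type "ce"`, with `--top_k_logits 10` stored during data construction). The sketch below is a reconstruction with hypothetical tensor names; the repository's loss code may differ in details such as instruction masking and how the gold-versus-teacher choice (`--distill_greater_as_gt`) is applied.

```python
import torch
import torch.nn.functional as F

def fusion_loss(student_logits, teacher_topk_logits, teacher_topk_ids, labels,
                lm_loss_weight=0.9, temperature=1.0):
    # Standard next-token LM loss on the gold labels.
    lm_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    # Distillation: CE between the student and the teacher's sparse top-k
    # distribution (renormalized over the k stored logits).
    teacher_probs = F.softmax(teacher_topk_logits / temperature, dim=-1)  # (B, T, k)
    student_logp = F.log_softmax(student_logits / temperature, dim=-1)    # (B, T, V)
    student_topk_logp = torch.gather(student_logp, -1, teacher_topk_ids)  # (B, T, k)
    distill_loss = -(teacher_probs * student_topk_logp).sum(dim=-1).mean()
    return lm_loss_weight * lm_loss + (1.0 - lm_loss_weight) * distill_loss
```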
SouthMemphis/t5-fine-tuned
SouthMemphis
2024-02-28T16:27:27Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-28T08:40:19Z
--- license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer metrics: - bleu model-index: - name: t5-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-fine-tuned This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2693 - Bleu: 0.0266 - Gen Len: 18.6386 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 2.6619 | 1.0 | 875 | 2.3101 | 0.0119 | 18.7209 | | 2.4541 | 2.0 | 1750 | 2.2693 | 0.0266 | 18.6386 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu118 - Datasets 2.17.0 - Tokenizers 0.15.2
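No usage example is included above, so here is a minimal inference sketch. The task prefix is an assumption: the card does not document the training data, so adjust the prefix to whatever task this checkpoint was actually tuned for.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SouthMemphis/t5-fine-tuned")
model = AutoModelForSeq2SeqLM.from_pretrained("SouthMemphis/t5-fine-tuned")

# T5 checkpoints expect a task prefix; this one is illustrative.
text = "translate English to German: The house is wonderful."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```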
Mithilss/Qwen1.5-1.8B-Chat-baseline
Mithilss
2024-02-28T16:25:36Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:Qwen/Qwen1.5-1.8B-Chat", "base_model:adapter:Qwen/Qwen1.5-1.8B-Chat", "license:other", "region:us" ]
null
2024-02-28T10:36:51Z
---
license: other
base_model: Qwen/Qwen1.5-1.8B-Chat
tags:
- generated_from_trainer
model-index:
- name: Qwen1.5-1.8B-Chat-baseline
  results: []
library_name: peft
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Qwen1.5-1.8B-Chat-baseline

This model is a fine-tuned version of [Qwen/Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- PEFT 0.5.0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
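Since this repository holds a PEFT adapter rather than a full model, the expected loading path is to attach the adapter to the base checkpoint. A minimal sketch (assuming the adapter files were pushed to this repo):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B-Chat")
model = PeftModel.from_pretrained(base, "Mithilss/Qwen1.5-1.8B-Chat-baseline")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B-Chat")

messages = [{"role": "user", "content": "Briefly introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```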
LoneStriker/FuseChat-7B-Slerp-4.0bpw-h6-exl2
LoneStriker
2024-02-28T16:25:33Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mixtral", "solar", "model-fusion", "fusechat", "conversational", "en", "arxiv:2402.16107", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T16:23:53Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- mistral
- mixtral
- solar
- model-fusion
- fusechat
library_name: transformers
---
<p align="center" width="100%">
</p>

<div id="top" align="center">

<p style="font-size: 32px; font-weight: bold;">FuseChat: Knowledge Fusion of Chat Models</p>

<h4> |<a href="https://arxiv.org/abs/2402.16107"> 📑 Paper </a> |
<a href="https://huggingface.co/FuseAI"> 🤗 Huggingface Repo </a> |
<a href="https://github.com/fanqiwan/FuseLLM"> 🐱 Github Repo </a> |
</h4>

<!-- **Authors:** -->

_**Fanqi Wan, Ziyi Yang, Longguang Zhong, Xiaojun Quan, Xinting Huang, Wei Bi**_

<!-- **Affiliations:** -->

_Sun Yat-sen University_

<p align="center">
    <img src="./assets/fig_0.png" width="70%"> <br>
</p>

</div>

## News

- **Feb 26, 2024:** 🔥 We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales such as [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo) and [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

## Contents

- [Overview](#overview)
- [Model Release](#model-release)
- [Quick Start](#quick-start)
- [Data Construction](#data-construction)
- [Pairwise Knowledge Fusion](#pairwise-knowledge-fusion)
- [Model Merging](#model-merging)
- [Evaluation](#evaluation)
- [Citation](#citation)

## Overview

In this work, we propose an extended framework of FuseLLM to integrate the collective knowledge and individual strengths of multiple chat LLMs of varied structures and scales into a more powerful chat LLM, resulting in FuseChat. FuseChat adopts a fuse-then-merge strategy with two main stages. First, it undertakes pairwise knowledge fusion for the source LLMs to derive multiple target LLMs of identical structure and size via lightweight fine-tuning. Then, these target LLMs are merged within the parameter space, wherein we propose a novel method, VaRM, for determining the merging weights based on the variation ratio of parameter matrices before and after fine-tuning.

Moreover, we argue that the concept of knowledge fusion adopted by both FuseChat and FuseLLM serves a fundamentally similar purpose to related approaches such as the recently popular mixture of experts (MoE), because they all aim to leverage the strengths of multiple models (experts). However, while MoE models require loading multiple experts during inference, which raises memory requirements, knowledge fusion supports the integration of multiple LLMs with diverse architectures into a single LLM without any additional memory requirement, making it more memory-efficient.
<p align="center">
    <img src="./assets/fig_1.png" width="95%"> <br>
</p>

## Model Release

We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5).

To support plug-and-play fusion of new source LLMs, we release our target LLMs, [OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar) and [OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral), which are obtained from pairwise knowledge fusion. Integrating a new source LLM at any scale requires only obtaining a target LLM from the new source LLM and merging it with the existing target LLMs.

We also release FuseChat variants obtained with other merging methods, [FuseChat-7B-SLERP](https://huggingface.co/FuseAI/FuseChat-7B-SLERP) and [FuseChat-7B-TA](https://huggingface.co/FuseAI/FuseChat-7B-TA), which achieve average performances of **8.19** and **8.20** on MT-Bench, respectively.

Here are the evaluation results.

<p align="center">
    <img src="./assets/tab_1.png" width="95%"> <br>
</p>

## Quick Start

### Setup

We use `python 3.11` in this project. Install all the libraries listed in `requirements.txt`:

```bash
pip install -r requirements.txt
```

### Usage

Here's how you can run the model using 🤗 Transformers:

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("FuseAI/FuseChat-7B-VaRM")

# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]

# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```

The GPT4 template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template:

```python
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```

## Data Construction

We curated a comprehensive training dataset, [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), from various sources. This dataset covers different styles and capabilities, featuring both human-written and model-generated data, and spanning general instruction-following and specific skills.

Here we show the scripts used to obtain representations from multiple source LLMs for model fusion.

1.
Get representations for each source LLM ```bash # We split the dataset into 4 splits, then process each split on one or multiple GPU. # OpenChat-3.5-7B export CUDA_VISIBLE_DEVICES=0 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_openchat_representation>" \ --tknz_dataset_path "<${i}_4_path_to_openchat_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 32 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --dataset_split_num 4 \ --dataset_index ${i} done # NH2-Mixtral-8x7B export CUDA_VISIBLE_DEVICES=0,1,2 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_mixtral_representation>" \ --tknz_dataset_path "<${i}_4_path_to_mixtral_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 4 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --device_map "auto" \ --dataset_split_num 4 \ --dataset_index ${i} done # NH2-Solar-10.7B export CUDA_VISIBLE_DEVICES=0 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "NousResearch/Nous-Hermes-2-SOLAR-10.7B" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_solar_representation>" \ --tknz_dataset_path "<${i}_4_path_to_solar_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 8 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --dataset_split_num 4 \ --dataset_index ${i} done ``` 2. Align representations from different source LLMs ```bash # Since the tokenizers and vocabularies of these source LLMs are identical, we do not align. # OpenChat-3.5-7B <-> NH2-Mixtral-8x7B for i in {0..3}; do python /train/replace_model.py \ --dataset_dir "<${i}_4_path_to_openchat_representation>" \ --replace_dataset_dir "<${i}_4_path_to_mixtral_representation>" \ --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_representation>" \ --preprocessing_num_workers 64 \ --batch_size 1000 \ --replace_model model_0 done # OpenChat-3.5-7B <-> NH2-Solar-10.7B for i in {0..3}; do python /train/replace_model.py \ --dataset_dir "<${i}_4_path_to_openchat_mixtral_representation>" \ --replace_dataset_dir "<${i}_4_path_to_solar_representation>" \ --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \ --preprocessing_num_workers 64 \ --batch_size 1000 \ --replace_model model_1 done ``` 3. Filter instances with NaN loss in the dataset ```bash for i in {0..3}; do python /train/filter_nan.py \ --input_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \ --output_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>" done ``` The final processed data is at `<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>`. ## Pairwise Knowledge Fusion We show the scripts for pairwise knowledge fusion. 
```bash # OpenChat-3.5-7B <-> NH2-Mixtral-8x7B export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \ --bf16 True \ --output_dir "<path_to_save_openchat_mixtral_ckpt>" \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "epoch" \ --save_steps 10000 \ --save_total_limit 5 \ --learning_rate 5e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --conv_temp "openchat" \ --lazy_preprocess True \ --flash_attn_transformers True \ --do_train \ --do_distill \ --distill_with_ref_model True \ --distill_with_aligned_model_0 True \ --distill_with_aligned_model_1 False \ --distill_loss_type "ce" \ --distill_teacher_temperature 1.0 \ --lm_loss_weight 0.9 \ --distill_greater_as_gt True \ --distill_greater_as_gt_type hard \ --dataloader_num_workers 8 \ --remove_unused_columns False # OpenChat-3.5-7B <-> NH2-Solar-10.7B export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \ --bf16 True \ --output_dir "<path_to_save_openchat_solar_ckpt>" \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "epoch" \ --save_steps 10000 \ --save_total_limit 5 \ --learning_rate 5e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --conv_temp "openchat" \ --lazy_preprocess True \ --flash_attn_transformers True \ --do_train \ --do_distill \ --distill_with_ref_model True \ --distill_with_aligned_model_0 False \ --distill_with_aligned_model_1 True \ --distill_loss_type "ce" \ --distill_teacher_temperature 1.0 \ --lm_loss_weight 0.9 \ --distill_greater_as_gt True \ --distill_greater_as_gt_type hard \ --dataloader_num_workers 8 \ --remove_unused_columns False ``` ## Model Merging We show the scripts to obtain the final FuseChat using different merging methods. 
```bash
# For "slerp", "ta", "ties", and "dare" methods
export CUDA_VISIBLE_DEVICES=0
mergekit-yaml merge/mergekit_configs/fusechat-slerp.yml "<path_to_save_fusechat_7b_slerp>"
mergekit-yaml merge/mergekit_configs/fusechat-ta.yml "<path_to_save_fusechat_7b_ta>"
mergekit-yaml merge/mergekit_configs/fusechat-ties.yml "<path_to_save_fusechat_7b_ties>"
mergekit-yaml merge/mergekit_configs/fusechat-dare.yml "<path_to_save_fusechat_7b_dare>"

# For "linear" method
python merge/VaRM/merge.py \
--merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
--merged_model_save_dir "<path_to_save_fusechat_7b_linear>" \
--merge_method "linear" \
--linear_weights "1,2"

# For our "varm" method
python merge/VaRM/analysis.py \
--model1_path "FuseAI/OpenChat-3.5-7B-Mixtral" \
--model2_path "FuseAI/OpenChat-3.5-7B-Solar" \
--save_path "<path_to_save_analysis_result>/analysis.json" \
--merge_type "square"

python merge/VaRM/merge.py \
--merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
--analysis_result "<path_to_save_analysis_result>/analysis.json" \
--merged_model_save_dir "<path_to_save_fusechat_7b_varm>" \
--merge_method "avg_param" \
--merge_type "square"
```

## Evaluation

We evaluate FuseChat on MT-Bench, which comprises 80 multi-turn dialogues spanning the writing, roleplay, reasoning, math, coding, STEM, and humanities domains. Please download the [official code](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) and follow its guidelines for evaluation. We provide the scripts for our evaluation.

```bash
# Step 1. Generate model answers to MT-Bench questions
export CUDA_VISIBLE_DEVICES=0,1
python gen_model_answer.py \
--model-path "FuseAI/FuseChat-7B-VaRM" \
--model-id "openchat_3.5_fusechat_7b_varm" \
--num-gpus-per-model 1 \
--num-gpus-total 2

# Step 2. Generate GPT-4 judgments
export OPENAI_API_KEY=XXXXXX  # set the OpenAI API key
python gen_judgment.py \
--parallel 2

# Step 3. Show MT-Bench scores
python show_result.py
```

## Citation

If you find this work relevant to your research or applications, please feel free to cite it!

```
@article{wan2024fusechat,
  title={FuseChat: Knowledge Fusion of Chat Models},
  author={Fanqi Wan and Ziyi Yang and Longguang Zhong and Xiaojun Quan and Xinting Huang and Wei Bi},
  journal={arXiv preprint arXiv:2402.16107},
  year={2024}
}
```
LoneStriker/FuseChat-7B-Slerp-3.0bpw-h6-exl2
LoneStriker
2024-02-28T16:23:51Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mixtral", "solar", "model-fusion", "fusechat", "conversational", "en", "arxiv:2402.16107", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T16:22:28Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- mistral
- mixtral
- solar
- model-fusion
- fusechat
library_name: transformers
---
<p align="center" width="100%">
</p>

<div id="top" align="center">

<p style="font-size: 32px; font-weight: bold;">FuseChat: Knowledge Fusion of Chat Models</p>

<h4> |<a href="https://arxiv.org/abs/2402.16107"> 📑 Paper </a> |
<a href="https://huggingface.co/FuseAI"> 🤗 Huggingface Repo </a> |
<a href="https://github.com/fanqiwan/FuseLLM"> 🐱 Github Repo </a> |
</h4>

<!-- **Authors:** -->

_**Fanqi Wan, Ziyi Yang, Longguang Zhong, Xiaojun Quan, Xinting Huang, Wei Bi**_

<!-- **Affiliations:** -->

_Sun Yat-sen University_

<p align="center">
    <img src="./assets/fig_0.png" width="70%"> <br>
</p>

</div>

## News

- **Feb 26, 2024:** 🔥 We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales such as [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo) and [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

## Contents

- [Overview](#overview)
- [Model Release](#model-release)
- [Quick Start](#quick-start)
- [Data Construction](#data-construction)
- [Pairwise Knowledge Fusion](#pairwise-knowledge-fusion)
- [Model Merging](#model-merging)
- [Evaluation](#evaluation)
- [Citation](#citation)

## Overview

In this work, we propose an extended framework of FuseLLM to integrate the collective knowledge and individual strengths of multiple chat LLMs of varied structures and scales into a more powerful chat LLM, resulting in FuseChat. FuseChat adopts a fuse-then-merge strategy with two main stages. First, it undertakes pairwise knowledge fusion for the source LLMs to derive multiple target LLMs of identical structure and size via lightweight fine-tuning. Then, these target LLMs are merged within the parameter space, wherein we propose a novel method, VaRM, for determining the merging weights based on the variation ratio of parameter matrices before and after fine-tuning.

Moreover, we argue that the concept of knowledge fusion adopted by both FuseChat and FuseLLM serves a fundamentally similar purpose to related approaches such as the recently popular mixture of experts (MoE), because they all aim to leverage the strengths of multiple models (experts). However, while MoE models require loading multiple experts during inference, which raises memory requirements, knowledge fusion supports the integration of multiple LLMs with diverse architectures into a single LLM without any additional memory requirement, making it more memory-efficient.
<p align="center">
    <img src="./assets/fig_1.png" width="95%"> <br>
</p>

## Model Release

We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5).

To support plug-and-play fusion of new source LLMs, we release our target LLMs, [OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar) and [OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral), which are obtained from pairwise knowledge fusion. Integrating a new source LLM at any scale requires only obtaining a target LLM from the new source LLM and merging it with the existing target LLMs.

We also release FuseChat variants obtained with other merging methods, [FuseChat-7B-SLERP](https://huggingface.co/FuseAI/FuseChat-7B-SLERP) and [FuseChat-7B-TA](https://huggingface.co/FuseAI/FuseChat-7B-TA), which achieve average performances of **8.19** and **8.20** on MT-Bench, respectively.

Here are the evaluation results.

<p align="center">
    <img src="./assets/tab_1.png" width="95%"> <br>
</p>

## Quick Start

### Setup

We use `python 3.11` in this project. Install all the libraries listed in `requirements.txt`:

```bash
pip install -r requirements.txt
```

### Usage

Here's how you can run the model using 🤗 Transformers:

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("FuseAI/FuseChat-7B-VaRM")

# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]

# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```

The GPT4 template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template:

```python
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```

## Data Construction

We curated a comprehensive training dataset, [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), from various sources. This dataset covers different styles and capabilities, featuring both human-written and model-generated data, and spanning general instruction-following and specific skills.

Here we show the scripts used to obtain representations from multiple source LLMs for model fusion.

1.
Get representations for each source LLM ```bash # We split the dataset into 4 splits, then process each split on one or multiple GPU. # OpenChat-3.5-7B export CUDA_VISIBLE_DEVICES=0 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_openchat_representation>" \ --tknz_dataset_path "<${i}_4_path_to_openchat_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 32 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --dataset_split_num 4 \ --dataset_index ${i} done # NH2-Mixtral-8x7B export CUDA_VISIBLE_DEVICES=0,1,2 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_mixtral_representation>" \ --tknz_dataset_path "<${i}_4_path_to_mixtral_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 4 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --device_map "auto" \ --dataset_split_num 4 \ --dataset_index ${i} done # NH2-Solar-10.7B export CUDA_VISIBLE_DEVICES=0 for i in {0..3}; do python /train/get_data_representation.py \ --model_name_or_path "NousResearch/Nous-Hermes-2-SOLAR-10.7B" \ --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \ --dataset_save_dir "<${i}_4_path_to_solar_representation>" \ --tknz_dataset_path "<${i}_4_path_to_solar_tknz>" \ --cache_dir "/.cache/huggingface/datasets" \ --model_max_length 2048 \ --load_in_half bf16 \ --batch_size 8 \ --top_k_logits 10 \ --save_per_token_metric \ --no_assert \ --conv_temp "openchat" \ --flash_attn_transformers \ --mask_instruction \ --dataset_split_num 4 \ --dataset_index ${i} done ``` 2. Align representations from different source LLMs ```bash # Since the tokenizers and vocabularies of these source LLMs are identical, we do not align. # OpenChat-3.5-7B <-> NH2-Mixtral-8x7B for i in {0..3}; do python /train/replace_model.py \ --dataset_dir "<${i}_4_path_to_openchat_representation>" \ --replace_dataset_dir "<${i}_4_path_to_mixtral_representation>" \ --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_representation>" \ --preprocessing_num_workers 64 \ --batch_size 1000 \ --replace_model model_0 done # OpenChat-3.5-7B <-> NH2-Solar-10.7B for i in {0..3}; do python /train/replace_model.py \ --dataset_dir "<${i}_4_path_to_openchat_mixtral_representation>" \ --replace_dataset_dir "<${i}_4_path_to_solar_representation>" \ --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \ --preprocessing_num_workers 64 \ --batch_size 1000 \ --replace_model model_1 done ``` 3. Filter instances with NaN loss in the dataset ```bash for i in {0..3}; do python /train/filter_nan.py \ --input_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \ --output_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>" done ``` The final processed data is at `<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>`. ## Pairwise Knowledge Fusion We show the scripts for pairwise knowledge fusion. 
```bash # OpenChat-3.5-7B <-> NH2-Mixtral-8x7B export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \ --bf16 True \ --output_dir "<path_to_save_openchat_mixtral_ckpt>" \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "epoch" \ --save_steps 10000 \ --save_total_limit 5 \ --learning_rate 5e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --conv_temp "openchat" \ --lazy_preprocess True \ --flash_attn_transformers True \ --do_train \ --do_distill \ --distill_with_ref_model True \ --distill_with_aligned_model_0 True \ --distill_with_aligned_model_1 False \ --distill_loss_type "ce" \ --distill_teacher_temperature 1.0 \ --lm_loss_weight 0.9 \ --distill_greater_as_gt True \ --distill_greater_as_gt_type hard \ --dataloader_num_workers 8 \ --remove_unused_columns False # OpenChat-3.5-7B <-> NH2-Solar-10.7B export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \ --model_name_or_path "openchat/openchat_3.5" \ --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \ --bf16 True \ --output_dir "<path_to_save_openchat_solar_ckpt>" \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --save_strategy "epoch" \ --save_steps 10000 \ --save_total_limit 5 \ --learning_rate 5e-6 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --conv_temp "openchat" \ --lazy_preprocess True \ --flash_attn_transformers True \ --do_train \ --do_distill \ --distill_with_ref_model True \ --distill_with_aligned_model_0 False \ --distill_with_aligned_model_1 True \ --distill_loss_type "ce" \ --distill_teacher_temperature 1.0 \ --lm_loss_weight 0.9 \ --distill_greater_as_gt True \ --distill_greater_as_gt_type hard \ --dataloader_num_workers 8 \ --remove_unused_columns False ``` ## Model Merging We show the scripts to obtain the final FuseChat using different merging methods. 
```bash
# For "slerp", "ta", "ties", and "dare" methods
export CUDA_VISIBLE_DEVICES=0
mergekit-yaml merge/mergekit_configs/fusechat-slerp.yml "<path_to_save_fusechat_7b_slerp>"
mergekit-yaml merge/mergekit_configs/fusechat-ta.yml "<path_to_save_fusechat_7b_ta>"
mergekit-yaml merge/mergekit_configs/fusechat-ties.yml "<path_to_save_fusechat_7b_ties>"
mergekit-yaml merge/mergekit_configs/fusechat-dare.yml "<path_to_save_fusechat_7b_dare>"

# For "linear" method
python merge/VaRM/merge.py \
--merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
--merged_model_save_dir "<path_to_save_fusechat_7b_linear>" \
--merge_method "linear" \
--linear_weights "1,2"

# For our "varm" method
python merge/VaRM/analysis.py \
--model1_path "FuseAI/OpenChat-3.5-7B-Mixtral" \
--model2_path "FuseAI/OpenChat-3.5-7B-Solar" \
--save_path "<path_to_save_analysis_result>/analysis.json" \
--merge_type "square"

python merge/VaRM/merge.py \
--merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
--analysis_result "<path_to_save_analysis_result>/analysis.json" \
--merged_model_save_dir "<path_to_save_fusechat_7b_varm>" \
--merge_method "avg_param" \
--merge_type "square"
```

## Evaluation

We evaluate FuseChat on MT-Bench, which comprises 80 multi-turn dialogues spanning the writing, roleplay, reasoning, math, coding, STEM, and humanities domains. Please download the [official code](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) and follow its guidelines for evaluation. We provide the scripts for our evaluation.

```bash
# Step 1. Generate model answers to MT-Bench questions
export CUDA_VISIBLE_DEVICES=0,1
python gen_model_answer.py \
--model-path "FuseAI/FuseChat-7B-VaRM" \
--model-id "openchat_3.5_fusechat_7b_varm" \
--num-gpus-per-model 1 \
--num-gpus-total 2

# Step 2. Generate GPT-4 judgments
export OPENAI_API_KEY=XXXXXX  # set the OpenAI API key
python gen_judgment.py \
--parallel 2

# Step 3. Show MT-Bench scores
python show_result.py
```

## Citation

If you find this work relevant to your research or applications, please feel free to cite it!

```
@article{wan2024fusechat,
  title={FuseChat: Knowledge Fusion of Chat Models},
  author={Fanqi Wan and Ziyi Yang and Longguang Zhong and Xiaojun Quan and Xinting Huang and Wei Bi},
  journal={arXiv preprint arXiv:2402.16107},
  year={2024}
}
```
LiukG/mus_promoter-finetuned-lora-NT-500m-human-ref
LiukG
2024-02-28T16:22:10Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "esm", "text-classification", "generated_from_trainer", "base_model:InstaDeepAI/nucleotide-transformer-500m-human-ref", "base_model:finetune:InstaDeepAI/nucleotide-transformer-500m-human-ref", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T16:19:06Z
--- license: cc-by-nc-sa-4.0 base_model: InstaDeepAI/nucleotide-transformer-500m-human-ref tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: mus_promoter-finetuned-lora-NT-500m-human-ref results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mus_promoter-finetuned-lora-NT-500m-human-ref This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-500m-human-ref](https://huggingface.co/InstaDeepAI/nucleotide-transformer-500m-human-ref) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4766 - F1: 0.8732 - Mcc Score: 0.7192 - Accuracy: 0.8594 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Mcc Score | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:--------:| | 0.7799 | 0.43 | 100 | 0.4819 | 0.8889 | 0.7210 | 0.8594 | | 0.4117 | 0.85 | 200 | 0.6728 | 0.74 | 0.1475 | 0.5938 | | 0.352 | 1.28 | 300 | 0.3396 | 0.9014 | 0.7826 | 0.8906 | | 0.2747 | 1.71 | 400 | 0.3458 | 0.9067 | 0.7750 | 0.8906 | | 0.2279 | 2.14 | 500 | 0.3053 | 0.9143 | 0.8181 | 0.9062 | | 0.2304 | 2.56 | 600 | 0.4057 | 0.8919 | 0.7437 | 0.875 | | 0.1362 | 2.99 | 700 | 0.5446 | 0.8657 | 0.7390 | 0.8594 | | 0.0391 | 3.42 | 800 | 0.7635 | 0.8889 | 0.7210 | 0.8594 | | 0.04 | 3.85 | 900 | 0.4871 | 0.9231 | 0.8108 | 0.9062 | | 0.0333 | 4.27 | 1000 | 0.4766 | 0.8732 | 0.7192 | 0.8594 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
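## Example usage

The card above does not include an inference snippet, so here is a minimal sketch. It assumes the repository holds the full fine-tuned weights (i.e., the LoRA adapter has been merged into the base model); the label order in the output is also an assumption and should be verified against the repository contents.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "LiukG/mus_promoter-finetuned-lora-NT-500m-human-ref"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder 300-bp sequence; replace with a real candidate promoter region.
sequence = "ATGC" * 75
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # per-class probabilities; class order is an assumption
```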
HexawareTech/chat-faq
HexawareTech
2024-02-28T16:21:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-28T16:21:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LiukG/mus_promoter-finetuned-lora-bert-large-t2t
LiukG
2024-02-28T16:16:51Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "custom_code", "base_model:AIRI-Institute/gena-lm-bert-large-t2t", "base_model:finetune:AIRI-Institute/gena-lm-bert-large-t2t", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T16:14:57Z
--- base_model: AIRI-Institute/gena-lm-bert-large-t2t tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: mus_promoter-finetuned-lora-bert-large-t2t results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mus_promoter-finetuned-lora-bert-large-t2t This model is a fine-tuned version of [AIRI-Institute/gena-lm-bert-large-t2t](https://huggingface.co/AIRI-Institute/gena-lm-bert-large-t2t) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1108 - F1: 0.9867 - Mcc Score: 0.9683 - Accuracy: 0.9844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Mcc Score | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:--------:| | 0.351 | 0.43 | 100 | 0.2729 | 0.9429 | 0.8814 | 0.9375 | | 0.1319 | 0.85 | 200 | 0.3380 | 0.9474 | 0.8724 | 0.9375 | | 0.1245 | 1.28 | 300 | 0.1139 | 0.9737 | 0.9373 | 0.9688 | | 0.0909 | 1.71 | 400 | 0.2115 | 0.9600 | 0.9039 | 0.9531 | | 0.0526 | 2.14 | 500 | 0.1748 | 0.9737 | 0.9373 | 0.9688 | | 0.0355 | 2.56 | 600 | 0.0314 | 0.9867 | 0.9683 | 0.9844 | | 0.0227 | 2.99 | 700 | 0.0849 | 0.9867 | 0.9683 | 0.9844 | | 0.0004 | 3.42 | 800 | 0.0131 | 0.9867 | 0.9683 | 0.9844 | | 0.0075 | 3.85 | 900 | 0.1264 | 0.9867 | 0.9683 | 0.9844 | | 0.0003 | 4.27 | 1000 | 0.1108 | 0.9867 | 0.9683 | 0.9844 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
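## Example usage

The card does not show how to run the model, so the sketch below is illustrative only. GENA-LM checkpoints ship custom modeling code, hence `trust_remote_code=True`; the sketch also assumes the repository holds merged fine-tuned weights rather than a bare LoRA adapter, and the label order is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "LiukG/mus_promoter-finetuned-lora-bert-large-t2t"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True)

# Placeholder sequence; replace with a real candidate promoter region.
sequence = "ATGC" * 75
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # per-class probabilities; class order is an assumption
```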
aiflows/JarvisFlowModule
aiflows
2024-02-28T16:16:04Z
0
0
null
[ "region:us" ]
null
2024-01-03T23:52:27Z
**For a detailed introduction to Jarvis, including Jarvis structures, an example run, etc., visit: https://huggingface.co/aiflows/JarvisFlowModule/blob/main/Introduction_to_Jarvis.md**

# Table of Contents

* [JarvisFlow](#JarvisFlow)
* [JarvisFlow](#JarvisFlow.JarvisFlow)
* [Controller\_JarvisFlow](#Controller_JarvisFlow)
* [Controller\_JarvisFlow](#Controller_JarvisFlow.Controller_JarvisFlow)
* [\_\_init\_\_](#Controller_JarvisFlow.Controller_JarvisFlow.__init__)
* [instantiate\_from\_config](#Controller_JarvisFlow.Controller_JarvisFlow.instantiate_from_config)
* [run](#Controller_JarvisFlow.Controller_JarvisFlow.run)
* [UpdatePlanAtomicFlow](#UpdatePlanAtomicFlow)
* [UpdatePlanAtomicFlow](#UpdatePlanAtomicFlow.UpdatePlanAtomicFlow)
* [run](#UpdatePlanAtomicFlow.UpdatePlanAtomicFlow.run)
* [Planner\_JarvisFlow](#Planner_JarvisFlow)
* [Planner\_JarvisFlow](#Planner_JarvisFlow.Planner_JarvisFlow)
* [detect\_finish\_or\_continue](#Planner_JarvisFlow.Planner_JarvisFlow.detect_finish_or_continue)
* [run\_Jarvis](#run_Jarvis)
* [CtrlExMem\_JarvisFlow](#CtrlExMem_JarvisFlow)
* [CtrlExMem\_JarvisFlow](#CtrlExMem_JarvisFlow.CtrlExMem_JarvisFlow)
* [detect\_finish\_or\_continue](#CtrlExMem_JarvisFlow.CtrlExMem_JarvisFlow.detect_finish_or_continue)
* [\_\_init\_\_](#__init__)
* [IntermediateAns\_Jarvis](#IntermediateAns_Jarvis)
* [IntermediateAns\_Jarvis](#IntermediateAns_Jarvis.IntermediateAns_Jarvis)
* [FinalAns\_Jarvis](#FinalAns_Jarvis)
* [FinalAns\_Jarvis](#FinalAns_Jarvis.FinalAns_Jarvis)
* [run](#FinalAns_Jarvis.FinalAns_Jarvis.run)

<a id="JarvisFlow"></a>

# JarvisFlow

<a id="JarvisFlow.JarvisFlow"></a>

## JarvisFlow Objects

```python
class JarvisFlow(AbstractBossFlow)
```

JarvisFlow is a flow module for the boss Jarvis. It inherits from AbstractBossFlow (https://huggingface.co/aiflows/AbstractBossFlowModule/tree/main). Jarvis is a general-purpose agent empowered by multiple large language models and tools, including a code interpreter. It takes task commands in natural language, then makes plans and writes and runs code in an interactive fashion to finish the task. The highlight of Jarvis is that it integrates 17 large language models, each prompted differently to achieve seamless inter-model communication and model-user interaction. This structure makes Jarvis much more robust, flexible, and memory-efficient than previous agents powered by a single model. Moreover, Jarvis integrates modules for LLM memory management, providing persistent mid- to long-term memory and efficient short-term memory management. This extends its lifespan well beyond that of single-model agents and makes it more powerful, since it can accumulate important knowledge such as a code library. Jarvis can also take responses from the user and the environment (e.g., code execution results) and spontaneously re-plan and re-execute, making execution more robust and reliable.

*Configuration Parameters*:

- `memory_files` (dict): mem_name-memfile_path pairs, where mem_name is the name of the memory (plan, logs, code_library) and memfile_path is the path to the corresponding memory file. Configure this either in the .yaml file, or override the `memory_files` entry when running the flow.
- `subflows_config` (dict): configs for subflows.
  - `MemoryReading`: Module used to read in memory (https://huggingface.co/aiflows/MemoryReadingFlowModule), with its output interface configured to output the needed memory.
  - `Planner`: Module used to interactively write plans for Jarvis; the planner is implemented in the JarvisFlow module.
  - `CtrlExMem`: Module used to execute the plan in a controller-executor manner and update the memory. It is implemented in the JarvisFlow module.

**The code interpreter of Jarvis (https://huggingface.co/aiflows/InterpreterFlowModule) relies on open-interpreter (https://github.com/KillianLucas/open-interpreter). We extract the specific code from open-interpreter because its litellm version is not compatible with that of the current version of aiflows (v0.1.7).**

<a id="Controller_JarvisFlow"></a>

# Controller\_JarvisFlow

<a id="Controller_JarvisFlow.Controller_JarvisFlow"></a>

## Controller\_JarvisFlow Objects

```python
class Controller_JarvisFlow(ChatAtomicFlow)
```

This class is the controller for JarvisFlow. It takes the plan generated by the planner and the logs of previous executions and, depending on the initial goal or subsequent feedback from the branching executors (and the human), decides which executor to call next (or exits by calling finish).

*Configuration Parameters*:

- `commands` (dict): a dictionary of commands that the controller can call; each command has a name, a description, and a list of input arguments. The commands are injected into the system message prompt template.
- `system_message_prompt_template` (str): the template for the system message prompt. Several components need to be injected into the template, including the commands, plan, plan_file_location, logs, and goal. The commands are injected when initializing the flow; the remaining components are injected at the beginning of each run.
- `previous_messages` (int): a sliding window of previous messages that is passed to the model. This is the central part of short-term memory management.

*Input Interface Non Initialized*:

- `goal` (str): the initial goal of the conversation; this is the input to the model.
- `memory_files` (dict): a dictionary of file locations that contains the plan and logs.
- `plan` (str): the plan generated by the planner; the plan changes (steps marked as done, or re-planned) as execution proceeds.
- `logs` (str): the logs of previous executions; the logs are appended to as execution proceeds.

*Input Interface Initialized*:

- `result` (str): the result of the previous execution; this is the input to the model.
- `memory_files` (dict): a dictionary of file locations that contains the plan and logs.
- `plan` (str): the plan generated by the planner; the plan changes (steps marked as done, or re-planned) as execution proceeds.
- `logs` (str): the logs of previous executions; the logs are appended to as execution proceeds.
- `goal` (str): the initial goal; it is kept because the goal is also injected into the system prompts, so that Jarvis does not forget it once the memory sliding window drops older messages.

*Output Interface*:

- `command` (str): the command to be executed by the executor.
- `command_args` (dict): the arguments of the command to be executed by the executor.

<a id="Controller_JarvisFlow.Controller_JarvisFlow.__init__"></a>

#### \_\_init\_\_

```python
def __init__(commands: List[Command], **kwargs)
```

Initialize the flow and inject the commands into the system message prompt template.

**Arguments**:

- `commands` (`List[Command]`): a list of commands that the controller can call.
- `kwargs` (`Dict[str, Any]`): other parameters.
<a id="Controller_JarvisFlow.Controller_JarvisFlow.instantiate_from_config"></a>

#### instantiate\_from\_config

```python
@classmethod
def instantiate_from_config(cls, config)
```

Set up the flow from the config file, in particular the prompts, backend, and commands.

**Arguments**:

- `config` (`Dict[str, Any]`): the config file.

**Returns**:

`Controller_JarvisFlow`: the instantiated flow.

<a id="Controller_JarvisFlow.Controller_JarvisFlow.run"></a>

#### run

```python
def run(input_data: Dict[str, Any]) -> Dict[str, Any]
```

Run the flow: update the system prompts and run the model.

**Arguments**:

- `input_data` (`Dict[str, Any]`): the input data to the flow.

**Returns**:

`Dict[str, Any]`: the output of the flow.

<a id="UpdatePlanAtomicFlow"></a>

# UpdatePlanAtomicFlow

<a id="UpdatePlanAtomicFlow.UpdatePlanAtomicFlow"></a>

## UpdatePlanAtomicFlow Objects

```python
class UpdatePlanAtomicFlow(AtomicFlow)
```

This class is called by the controller to update the plan file when the controller realizes one step of the plan is done. The updated plan is exactly the same as the old plan, except that the completed step is marked as done.

*Input Interface*:

- `updated_plan`: the updated plan; the same as the old plan, except that the completed step is marked as done.

*Output Interface*:

- `result`: the result of the operation

*Configuration Parameters*:

- `input_interface`: the input interface of the atomic flow
- `output_interface`: the output interface of the atomic flow

<a id="UpdatePlanAtomicFlow.UpdatePlanAtomicFlow.run"></a>

#### run

```python
def run(input_data: Dict[str, Any])
```

Run the atomic flow.

**Arguments**:

- `input_data` (`Dict[str, Any]`): the input data

**Returns**:

`Dict[str, Any]`: the result of the operation

<a id="Planner_JarvisFlow"></a>

# Planner\_JarvisFlow

<a id="Planner_JarvisFlow.Planner_JarvisFlow"></a>

## Planner\_JarvisFlow Objects

```python
class Planner_JarvisFlow(PlanWriterFlow)
```

This flow inherits from PlanWriterFlow (https://huggingface.co/aiflows/PlanWriterFlowModule) and is used to generate a plan for Jarvis.

*Input Interface*:

- `goal` (str): the goal of the planner; the goal comes from the user's query when calling Jarvis.
- `memory_files` (dict): a dictionary of memory files; the keys are the names of the memory files and the values are their locations.

*Output Interface*:

- `plan` (str): the generated plan; the plan string is written to the plan file and returned to the flow state of the Jarvis flow.
- `summary` (str): the summary of the planner.
- `status` (str): the status of the planner; can be "finished" or "unfinished".

*Configuration Parameters*:

- Also refer to PlanWriterFlow (https://huggingface.co/aiflows/PlanWriterFlowModule/blob/main/PlanWriterFlow.py) for more configuration parameters.
- `input_interface`: the input interface of the flow.
- `output_interface`: the output interface of the flow.
- `subflows_config`: the configuration of the subflows of the flow.
- `early_exit_key`: the key of the early exit signal in the output payload.
- `topology`: the topology of the subflows.

<a id="Planner_JarvisFlow.Planner_JarvisFlow.detect_finish_or_continue"></a>

#### detect\_finish\_or\_continue

```python
@CircularFlow.output_msg_payload_processor
def detect_finish_or_continue(output_payload: Dict[str, Any], src_flow) -> Dict[str, Any]
```

This function detects whether the planner should finish or continue.
**Arguments**:

- `output_payload` (`Dict[str, Any]`): the output payload of the flow.
- `src_flow` (`Flow`): the flow that generates the output payload.

**Returns**:

`Dict[str, Any]`: the output payload of the flow.

<a id="run_Jarvis"></a>

# run\_Jarvis

<a id="CtrlExMem_JarvisFlow"></a>

# CtrlExMem\_JarvisFlow

<a id="CtrlExMem_JarvisFlow.CtrlExMem_JarvisFlow"></a>

## CtrlExMem\_JarvisFlow Objects

```python
class CtrlExMem_JarvisFlow(CtrlExMemFlow)
```

This class inherits from the CtrlExMemFlow class from AbstractBossFlowModule. See: https://huggingface.co/aiflows/AbstractBossFlowModule/blob/main/CtrlExMemFlow.py

*Input Interface*:

- `plan`
- `memory_files`
- `logs`
- `goal`

*Output Interface*:

- `result`
- `summary`

*Configuration Parameters*:

- `input_interface`: the input interface of the flow
- `output_interface`: the output interface of the flow
- `subflows_config`: the subflows configuration of the flow
- `topology`: the topology of the subflows

Note that:

1. In the controller, we only keep the previous 3 messages for memory management, namely:
   a. The assistant message (the controller's last command)
   b. The manually updated new system prompt (new logs, new plan, etc.)
   c. The user message (result, feedback)
2. Each time one executor from the branch is executed, the logs are updated. This means:
   a. The logs file of Jarvis is updated.
   b. After MemoryReading at the end of each run of the loop, the logs in the flow_state are updated.
   c. The next time the controller is called, the updated logs are injected into the system prompts.
3. In the prompts of the controller, when the controller realizes one step of the plan is done, we ask it to revise what was done and mark the current step as done. This means:
   a. The plan file is updated.
   b. The plan in the flow_state is updated.
   c. The next time the controller is called, the updated plan is injected into the system prompts.

This is how memory management works: it frees up context space for LLM execution and ensures the LLM does not forget important information.

<a id="CtrlExMem_JarvisFlow.CtrlExMem_JarvisFlow.detect_finish_or_continue"></a>

#### detect\_finish\_or\_continue

```python
@CircularFlow.output_msg_payload_processor
def detect_finish_or_continue(output_payload: Dict[str, Any], src_flow) -> Dict[str, Any]
```

This function is called when the JarvisFlow receives a message from one of its branches. It processes the message and decides whether the JarvisFlow should continue or finish.

**Arguments**:

- `output_payload` (`Dict[str, Any]`): the output payload of the branch
- `src_flow` (`str`): the source flow of the message

**Returns**:

`Dict[str, Any]`: the updated output payload

<a id="__init__"></a>

# \_\_init\_\_

<a id="IntermediateAns_Jarvis"></a>

# IntermediateAns\_Jarvis

<a id="IntermediateAns_Jarvis.IntermediateAns_Jarvis"></a>

## IntermediateAns\_Jarvis Objects

```python
class IntermediateAns_Jarvis(HumanStandardInputFlow)
```

This class inherits from the HumanStandardInputFlow class. It is used to give an intermediate answer to the user, who can then provide feedback on the intermediate result. Depending on the user's feedback, the controller decides what to do next (e.g., continue or re-plan).

*Input Interface*:

- `answer`: The intermediate answer to the question asked by the user.

*Output Interface*:

- `result`: The user's response to the intermediate answer.
- `summary`: A summary of the action.
*Configuration parameters*: - `query_message_prompt_template`: The template of the message that is shown to the user. - `request_multi_line_input_flag`: A flag that indicates whether the user can give a multi-line input. - `end_of_input_string`: The string that indicates the end of the input. <a id="FinalAns_Jarvis"></a> # FinalAns\_Jarvis <a id="FinalAns_Jarvis.FinalAns_Jarvis"></a> ## FinalAns\_Jarvis Objects ```python class FinalAns_Jarvis(HumanStandardInputFlow) ``` This class inherits from the HumanStandardInputFlow class. It is used to give the final answer to the user. *Input Interface*: - `answer`: The answer to the question asked by the user. *Output Interface*: - `result`: User's response to the final answer. - `summary`: A summary of the action. *Configuration parameters*: - `query_message_prompt_template`: The template of the message that is shown to the user. - `request_multi_line_input_flag`: A flag that indicates whether the user can give a multi-line input. - `end_of_input_string`: The string that indicates the end of the input. <a id="FinalAns_Jarvis.FinalAns_Jarvis.run"></a> #### run ```python def run(input_data: Dict[str, Any]) -> Dict[str, Any] ``` The run method of the class. **Arguments**: - `input_data` (`Dict[str, Any]`): The input data of the flow. **Returns**: `Dict[str, Any]`: The output data of the flow.
LiukG/mus_promoter-finetuned-lora-NT-500m-1000g
LiukG
2024-02-28T16:08:19Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "esm", "text-classification", "generated_from_trainer", "base_model:InstaDeepAI/nucleotide-transformer-500m-1000g", "base_model:finetune:InstaDeepAI/nucleotide-transformer-500m-1000g", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T16:06:38Z
--- license: cc-by-nc-sa-4.0 base_model: InstaDeepAI/nucleotide-transformer-500m-1000g tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: mus_promoter-finetuned-lora-NT-500m-1000g results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mus_promoter-finetuned-lora-NT-500m-1000g This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-500m-1000g](https://huggingface.co/InstaDeepAI/nucleotide-transformer-500m-1000g) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3065 - F1: 0.9351 - Mcc Score: 0.8414 - Accuracy: 0.9219 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Mcc Score | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:--------:| | 0.6126 | 0.43 | 100 | 0.4697 | 0.8767 | 0.7135 | 0.8594 | | 0.3854 | 0.85 | 200 | 0.2682 | 0.9296 | 0.8460 | 0.9219 | | 0.4832 | 1.28 | 300 | 0.2444 | 0.9296 | 0.8460 | 0.9219 | | 0.3536 | 1.71 | 400 | 0.3433 | 0.9167 | 0.8113 | 0.9062 | | 0.3215 | 2.14 | 500 | 0.3475 | 0.9351 | 0.8414 | 0.9219 | | 0.2961 | 2.56 | 600 | 0.2347 | 0.9231 | 0.8108 | 0.9062 | | 0.2742 | 2.99 | 700 | 0.3438 | 0.9333 | 0.8395 | 0.9219 | | 0.2375 | 3.42 | 800 | 0.3448 | 0.9351 | 0.8414 | 0.9219 | | 0.2438 | 3.85 | 900 | 0.2789 | 0.9351 | 0.8414 | 0.9219 | | 0.2104 | 4.27 | 1000 | 0.3065 | 0.9351 | 0.8414 | 0.9219 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
Lienid/nous-ten
Lienid
2024-02-28T16:07:30Z
5
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T00:30:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LiukG/mus_promoter-finetuned-lora-bert-base-t2t
LiukG
2024-02-28T16:05:30Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "custom_code", "base_model:AIRI-Institute/gena-lm-bert-base-t2t", "base_model:finetune:AIRI-Institute/gena-lm-bert-base-t2t", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T16:05:08Z
--- base_model: AIRI-Institute/gena-lm-bert-base-t2t tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: mus_promoter-finetuned-lora-bert-base-t2t results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mus_promoter-finetuned-lora-bert-base-t2t This model is a fine-tuned version of [AIRI-Institute/gena-lm-bert-base-t2t](https://huggingface.co/AIRI-Institute/gena-lm-bert-base-t2t) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1792 - F1: 0.9577 - Mcc Score: 0.9094 - Accuracy: 0.9531 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Mcc Score | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:--------:| | 0.6514 | 0.43 | 100 | 0.4785 | 0.9351 | 0.8414 | 0.9219 | | 0.3646 | 0.85 | 200 | 0.9139 | 0.8276 | 0.5429 | 0.7656 | | 0.5499 | 1.28 | 300 | 0.2149 | 0.9600 | 0.9039 | 0.9531 | | 0.3001 | 1.71 | 400 | 0.3707 | 0.9351 | 0.8414 | 0.9219 | | 0.227 | 2.14 | 500 | 0.1903 | 0.9474 | 0.8724 | 0.9375 | | 0.2107 | 2.56 | 600 | 0.1515 | 0.9730 | 0.9359 | 0.9688 | | 0.1793 | 2.99 | 700 | 0.2371 | 0.9444 | 0.8749 | 0.9375 | | 0.1212 | 3.42 | 800 | 0.1112 | 0.9600 | 0.9039 | 0.9531 | | 0.1338 | 3.85 | 900 | 0.1401 | 0.9730 | 0.9359 | 0.9688 | | 0.0912 | 4.27 | 1000 | 0.1792 | 0.9577 | 0.9094 | 0.9531 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
Rasi1610/Death_Se46_model_p4
Rasi1610
2024-02-28T16:03:11Z
4
0
transformers
[ "transformers", "pytorch", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-02-28T10:01:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xwvzr/brain-tumor-classification-cnn-tuned
xwvzr
2024-02-28T16:02:15Z
0
0
null
[ "region:us" ]
null
2024-02-28T15:23:13Z
# Brain Tumor Classification Model

## Overview
This repository contains a deep learning model for classifying brain tumor images into different categories using convolutional neural networks (CNNs). The model is trained on a dataset of MRI images of brain tumors.

## Model Architecture
The model is a convolutional neural network (CNN) consisting of multiple convolutional layers followed by max-pooling layers that extract features from the input images. The extracted features are then passed through fully connected layers to perform classification.

## Dataset
The dataset used can be accessed through the following link: [Brain Tumor MRI Dataset](https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset).

## Training
The model is trained using the PyTorch deep learning framework. Training optimizes the model parameters with the Adam optimizer, minimizing the categorical cross-entropy loss. Training is performed on a GPU for faster computation.

## Evaluation
The model is evaluated on validation loss and accuracy, which indicate how well it classifies brain tumor images.

## Test Evaluation
Average Validation Loss: 0.1160, Average Validation Accuracy: 0.9527

## Final Evaluation
Average Validation Loss: 0.0893, Average Validation Accuracy: 0.9658
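## Training Step Sketch

The exact architecture is not published in this card, so the snippet below is only an illustrative sketch of the training setup described above (a small CNN, Adam, cross-entropy loss, GPU). The layer sizes, learning rate, input resolution, and the 4-class output are all assumptions.

```python
import torch
import torch.nn as nn

# Illustrative stand-in architecture; the real CNN layers are not published.
class TumorCNN(nn.Module):
    def __init__(self, num_classes: int = 4):  # 4 classes assumed from the Kaggle dataset
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 56 * 56, num_classes)  # assumes 224x224 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TumorCNN().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate is an assumption
criterion = nn.CrossEntropyLoss()  # the "categorical cross-entropy" from the card

def train_step(images, labels):
    """One optimization step on a batch of images and integer class labels."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images.to(device)), labels.to(device))
    loss.backward()
    optimizer.step()
    return loss.item()
```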
nikitharao/catlm
nikitharao
2024-02-28T15:56:33Z
69
4
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "arxiv:2310.01602", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-09T16:37:54Z
---
license: mit
---

# CAT-LM: Aligned <u>C</u>ode <u>A</u>nd <u>T</u>ests Language Model

### Model Description

**CAT-LM** is a GPT-style language model with 2.7 billion parameters, trained on a corpus of Python and Java projects (~260GB). It supports a maximum sequence length of 8,192 tokens. We utilize a novel pretraining signal that explicitly considers the mapping between code and test files when available.

### Publication

[CAT-LM: Training Language Models on Aligned Code And Tests](https://arxiv.org/abs/2310.01602)

[Nikitha Rao](https://raonikitha.github.io)\*, [Kush Jain](https://www.kushjain.com/)\*, [Uri Alon](https://urialon.ml), [Claire Le Goues](https://clairelegoues.com), and [Vincent J. Hellendoorn](http://vhellendoorn.github.io)\
38th IEEE/ACM International Conference on Automated Software Engineering (ASE 2023)

### Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('nikitharao/catlm', use_fast=False)
model = AutoModelForCausalLM.from_pretrained('nikitharao/catlm')

prompt = """
def add(x,y):
    \"\"\"Add two numbers x and y\"\"\"
    return x+y
<|codetestpair|>
"""

print('Input prompt:')
print(prompt)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# The model was trained without the `</s>` token, so strip it if the tokenizer appended one.
if tokenizer.decode(input_ids[0,-1]) == '</s>':
    input_ids = input_ids[:,:-1]
print(input_ids)

len_input = input_ids.shape[1]
sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_new_tokens=512,
    top_k=50,
    top_p=0.95,
    temperature=0.2
)
generated_output = sample_output[0][len_input:]
output = tokenizer.decode(generated_output, skip_special_tokens=True)
print('Output:')
print(output)
```

<b>Note:</b> The model was trained without the `</s>` token, so it should be removed from tokenized inputs (as shown in the snippet above).

Please see https://github.com/RaoNikitha/CAT-LM for more details.
Danung/model-cnn-cifar10-vgg19
Danung
2024-02-28T15:56:12Z
0
0
transformers
[ "transformers", "image-classification", "en", "dataset:cifar10", "license:artistic-2.0", "endpoints_compatible", "region:us" ]
image-classification
2024-02-28T07:36:10Z
--- license: artistic-2.0 datasets: - cifar10 language: - en library_name: transformers pipeline_tag: image-classification ---
namangarg110/hiera_base_224
namangarg110
2024-02-28T15:54:29Z
176
0
transformers
[ "transformers", "pytorch", "hiera", "image-feature-extraction", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
image-feature-extraction
2024-02-28T07:06:29Z
---
license: cc-by-nc-4.0
---

# Hiera (hiera_base_224)

Hiera is a hierarchical transformer that is a much more efficient alternative to previous hierarchical models such as ConvNeXT and Swin. Vanilla transformer architectures (Dosovitskiy et al., 2020) are popular, simple, and scalable, and they enable pretraining strategies such as MAE (He et al., 2022). However, because they use the same spatial resolution and number of channels throughout the network, ViTs make inefficient use of their parameters. This is in contrast to prior "hierarchical" or "multi-scale" models (e.g., Krizhevsky et al. (2012); He et al. (2016)), which use fewer channels but higher spatial resolution in early stages with simpler features, and more channels but lower spatial resolution later in the model with more complex features. These hierarchical models, however, add complexity and overhead operations in pursuit of state-of-the-art ImageNet-1k accuracy, which makes them slower. Hiera attempts to address this issue by teaching the model spatial biases through MAE pretraining instead.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/ogkud4qc564bPX3f0bGXO.png)
pragsGit/whisper-tiny-minds14
pragsGit
2024-02-28T15:53:40Z
77
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-28T13:32:59Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-minds14 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.3481912144702842 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-minds14 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.7400 - Wer Ortho: 35.2624 - Wer: 0.3482 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0009 | 17.24 | 500 | 0.7400 | 35.2624 | 0.3482 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.17.1 - Tokenizers 0.15.2
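## Example usage

The card does not show how to run the model, so here is a minimal transcription sketch using the Transformers `pipeline` API. The audio path is a placeholder; since the model was fine-tuned on English (en-US) banking queries, expect the best results on similar speech.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="pragsGit/whisper-tiny-minds14")

# "sample.wav" is a placeholder path to a local English audio file.
result = asr("sample.wav")
print(result["text"])
```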
LiukG/mus_promoter-finetuned-lora-NT-v2-50m-ms
LiukG
2024-02-28T15:53:33Z
146
0
transformers
[ "transformers", "tensorboard", "safetensors", "esm", "text-classification", "generated_from_trainer", "custom_code", "base_model:InstaDeepAI/nucleotide-transformer-v2-50m-multi-species", "base_model:finetune:InstaDeepAI/nucleotide-transformer-v2-50m-multi-species", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T15:53:18Z
--- license: cc-by-nc-sa-4.0 base_model: InstaDeepAI/nucleotide-transformer-v2-50m-multi-species tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: mus_promoter-finetuned-lora-NT-v2-50m-ms results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mus_promoter-finetuned-lora-NT-v2-50m-ms This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-v2-50m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-50m-multi-species) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1172 - F1: 0.9863 - Mcc Score: 0.9686 - Accuracy: 0.9844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Mcc Score | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:--------:| | 0.4189 | 0.43 | 100 | 0.4745 | 0.9067 | 0.7750 | 0.8906 | | 0.2804 | 0.85 | 200 | 0.1875 | 0.9589 | 0.9048 | 0.9531 | | 0.2198 | 1.28 | 300 | 0.1441 | 0.9730 | 0.9359 | 0.9688 | | 0.1346 | 1.71 | 400 | 0.0821 | 0.9863 | 0.9686 | 0.9844 | | 0.0875 | 2.14 | 500 | 0.1647 | 0.9730 | 0.9359 | 0.9688 | | 0.0554 | 2.56 | 600 | 0.0937 | 0.9863 | 0.9686 | 0.9844 | | 0.0314 | 2.99 | 700 | 0.1127 | 0.9863 | 0.9686 | 0.9844 | | 0.0268 | 3.42 | 800 | 0.1104 | 0.9863 | 0.9686 | 0.9844 | | 0.01 | 3.85 | 900 | 0.1146 | 0.9863 | 0.9686 | 0.9844 | | 0.0008 | 4.27 | 1000 | 0.1172 | 0.9863 | 0.9686 | 0.9844 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
Likz/my_awesome_food_model
Likz
2024-02-28T15:50:49Z
192
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-28T14:09:59Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_food_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6051 - Accuracy: 0.913 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7294 | 0.99 | 62 | 2.5134 | 0.847 | | 1.8388 | 2.0 | 125 | 1.7709 | 0.885 | | 1.5919 | 2.98 | 186 | 1.6051 | 0.913 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
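## Example usage

A minimal classification sketch using the Transformers `pipeline` API. The image path is a placeholder, and the label set depends on the (unnamed) fine-tuning dataset.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Likz/my_awesome_food_model")

# "dish.jpg" is a placeholder path to a local image.
for prediction in classifier("dish.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```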
lukasberndt/Huggy
lukasberndt
2024-02-28T15:50:21Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-02-28T15:46:25Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: lukasberndt/Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
LiukG/mus_promoter-finetuned-lora-NT-v2-100m-ms
LiukG
2024-02-28T15:48:04Z
146
0
transformers
[ "transformers", "tensorboard", "safetensors", "esm", "text-classification", "generated_from_trainer", "custom_code", "base_model:InstaDeepAI/nucleotide-transformer-v2-100m-multi-species", "base_model:finetune:InstaDeepAI/nucleotide-transformer-v2-100m-multi-species", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T15:47:41Z
--- license: cc-by-nc-sa-4.0 base_model: InstaDeepAI/nucleotide-transformer-v2-100m-multi-species tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: mus_promoter-finetuned-lora-NT-v2-100m-ms results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mus_promoter-finetuned-lora-NT-v2-100m-ms This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-v2-100m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-100m-multi-species) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0387 - F1: 0.9867 - Mcc Score: 0.9683 - Accuracy: 0.9844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Mcc Score | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:--------:| | 0.4137 | 0.43 | 100 | 0.1128 | 0.9722 | 0.9385 | 0.9688 | | 0.2366 | 0.85 | 200 | 0.2834 | 0.9459 | 0.8719 | 0.9375 | | 0.1578 | 1.28 | 300 | 0.1574 | 0.9722 | 0.9385 | 0.9688 | | 0.0992 | 1.71 | 400 | 0.0585 | 0.9863 | 0.9686 | 0.9844 | | 0.1293 | 2.14 | 500 | 0.0860 | 0.9863 | 0.9686 | 0.9844 | | 0.0503 | 2.56 | 600 | 0.0916 | 0.9863 | 0.9686 | 0.9844 | | 0.0634 | 2.99 | 700 | 0.0265 | 0.9867 | 0.9683 | 0.9844 | | 0.007 | 3.42 | 800 | 0.0012 | 1.0 | 1.0 | 1.0 | | 0.0135 | 3.85 | 900 | 0.0015 | 1.0 | 1.0 | 1.0 | | 0.0113 | 4.27 | 1000 | 0.0387 | 0.9867 | 0.9683 | 0.9844 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
ghozyulhaq/REAAI_CNN_Ghozy
ghozyulhaq
2024-02-28T15:47:05Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2024-02-28T15:47:03Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 0.0010000000474974513 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
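Since the card documents only optimizer settings, a minimal loading sketch may help; it assumes the repo stores a standard Keras model that `huggingface_hub` can rebuild (the call below is the stock `from_pretrained_keras` helper, and TensorFlow must be installed):

```python
from huggingface_hub import from_pretrained_keras

# Rebuild the Keras model stored in the repo
model = from_pretrained_keras("ghozyulhaq/REAAI_CNN_Ghozy")
model.summary()  # inspect the architecture, since the card does not document it
```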
DrAEZF6quN2YktvBc7jHxX/Eshw8R4V
DrAEZF6quN2YktvBc7jHxX
2024-02-28T15:46:08Z
171
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T15:45:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Parikshit-07/test
Parikshit-07
2024-02-28T15:40:30Z
0
0
null
[ "safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T15:40:21Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
lvcalucioli/phi2_linear_question-answering_merged
lvcalucioli
2024-02-28T15:40:01Z
48
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-28T15:38:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MiguelGorilla/mistral_fine_tuned_merged
MiguelGorilla
2024-02-28T15:39:35Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "question-answering", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
question-answering
2024-02-28T15:24:20Z
--- library_name: transformers pipeline_tag: question-answering --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mishig/test-tips
mishig
2024-02-28T15:38:06Z
0
1
null
[ "region:us" ]
null
2024-02-28T15:36:12Z
# Testing different tips > [!NOTE] > Highlights information that users should take into account, even when skimming. > > Some more notes > [!TIP] > Optional information to help a user be more successful. > > Some more tips > [!IMPORTANT] > Crucial information necessary for users to succeed. > > Some more importance > [!WARNING] > Critical content demanding immediate user attention due to potential risks. > > Some more warning > [!CAUTION] > Negative potential consequences of an action. > > Some more caution
AmrutaMuthal/mero_scaled_filled_boxes_from_pretrained_controlnet_low_lr
AmrutaMuthal
2024-02-28T15:21:01Z
2
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2024-02-28T07:51:44Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KG5KEY/videomae-base-finetuned-gemep-epochs25
KG5KEY
2024-02-28T15:20:21Z
6
0
transformers
[ "transformers", "pytorch", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-02-28T04:05:58Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-gemep-epochs25 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-gemep-epochs25 This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4373 - Accuracy: 0.9123 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1350 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7648 | 0.04 | 54 | 1.7458 | 0.2698 | | 1.6977 | 1.04 | 108 | 1.5724 | 0.2698 | | 1.5208 | 2.04 | 162 | 1.5350 | 0.2857 | | 1.39 | 3.04 | 216 | 1.4793 | 0.4286 | | 1.3546 | 4.04 | 270 | 1.7122 | 0.2857 | | 0.9587 | 5.04 | 324 | 1.4137 | 0.3492 | | 1.0259 | 6.04 | 378 | 1.2302 | 0.6190 | | 0.7292 | 7.04 | 432 | 1.4772 | 0.3810 | | 0.6499 | 8.04 | 486 | 1.0864 | 0.5873 | | 0.5417 | 9.04 | 540 | 1.4085 | 0.4603 | | 0.5073 | 10.04 | 594 | 1.5436 | 0.5556 | | 0.5052 | 11.04 | 648 | 1.2583 | 0.6190 | | 0.419 | 12.04 | 702 | 1.1921 | 0.6349 | | 0.2677 | 13.04 | 756 | 1.0662 | 0.6825 | | 0.1601 | 14.04 | 810 | 0.7688 | 0.7143 | | 0.1784 | 15.04 | 864 | 1.2971 | 0.6508 | | 0.1338 | 16.04 | 918 | 0.9488 | 0.7937 | | 0.0875 | 17.04 | 972 | 1.0488 | 0.7619 | | 0.0748 | 18.04 | 1026 | 0.9617 | 0.8095 | | 0.0973 | 19.04 | 1080 | 1.0684 | 0.7460 | | 0.0443 | 20.04 | 1134 | 1.0433 | 0.7778 | | 0.0787 | 21.04 | 1188 | 0.8444 | 0.8254 | | 0.0035 | 22.04 | 1242 | 1.1333 | 0.7778 | | 0.0033 | 23.04 | 1296 | 0.9644 | 0.8095 | | 0.0686 | 24.04 | 1350 | 0.9196 | 0.8254 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0 - Datasets 2.12.0 - Tokenizers 0.13.3
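A minimal inference sketch (assuming the checkpoint exposes the standard VideoMAE processor and classification head; the random 16-frame clip below is a stand-in for frames sampled from a real video):

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

repo = "KG5KEY/videomae-base-finetuned-gemep-epochs25"
processor = AutoImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# 16 random 224x224 RGB frames as a stand-in for a real sampled clip
video = list(np.random.randint(0, 255, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])  # predicted class label
```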
LiukG/mus_promoter-finetuned-lora-NT-v2-500m-ms
LiukG
2024-02-28T15:19:47Z
149
0
transformers
[ "transformers", "tensorboard", "safetensors", "esm", "text-classification", "generated_from_trainer", "custom_code", "base_model:InstaDeepAI/nucleotide-transformer-v2-500m-multi-species", "base_model:finetune:InstaDeepAI/nucleotide-transformer-v2-500m-multi-species", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T15:17:41Z
---
license: cc-by-nc-sa-4.0
base_model: InstaDeepAI/nucleotide-transformer-v2-500m-multi-species
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: mus_promoter-finetuned-lora-NT-v2-500m-ms
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mus_promoter-finetuned-lora-NT-v2-500m-ms

This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-v2-500m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-500m-multi-species) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1042
- F1: 0.9863
- Mcc Score: 0.9686
- Accuracy: 0.9844

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Mcc Score | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:--------:|
| 0.5534        | 0.43  | 100  | 0.5989          | 0.8824 | 0.7646    | 0.875    |
| 0.3298        | 0.85  | 200  | 0.1327          | 0.9730 | 0.9359    | 0.9688   |
| 0.1827        | 1.28  | 300  | 0.0652          | 0.9867 | 0.9683    | 0.9844   |
| 0.2014        | 1.71  | 400  | 0.2227          | 0.9600 | 0.9039    | 0.9531   |
| 0.1183        | 2.14  | 500  | 0.0556          | 0.9863 | 0.9686    | 0.9844   |
| 0.1052        | 2.56  | 600  | 0.2231          | 0.9577 | 0.9094    | 0.9531   |
| 0.0781        | 2.99  | 700  | 0.1219          | 0.9730 | 0.9359    | 0.9688   |
| 0.0477        | 3.42  | 800  | 0.1048          | 0.9863 | 0.9686    | 0.9844   |
| 0.025         | 3.85  | 900  | 0.0978          | 0.9863 | 0.9686    | 0.9844   |
| 0.0221        | 4.27  | 1000 | 0.1042          | 0.9863 | 0.9686    | 0.9844   |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
biodatlab/distill-whisper-th-small
biodatlab
2024-02-28T15:19:17Z
184
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-16T08:34:22Z
--- license: mit --- ## Distilled Small Whisper ASR Model for Thai ### Model Description This is a distilled Automatic Speech Recognition (ASR) model, based on the Whisper architecture. It has been specifically tailored for Thai language speech recognition. The model features 4 decoder layers (vs 12 in teacher model) and has been distilled from a larger teacher model, focusing on enhancing performance and efficiency. #### Distillation Details - **Teacher Model**: Small Whisper ASR model - **Datasets Used for Distillation**: - Common Voice v13 - Gowajee - Thai Elderly Speech Corpus - Custom Scraped Data - Thai-Central Dialect from [SLSCU Thai Dialect Corpus](https://github.com/SLSCU/thai-dialect-corpus) ### Model Performance - **DeepCut Tokenized WER on Common Voice 13 Test Set**: - Distilled Model: **11.23%** - Teacher Model: **13.14%** This shows an improvement in Word Error Rate (WER), indicating enhanced accuracy in speech recognition tasks for the Thai language. ### Intended Use This model is intended for use in applications requiring Thai language speech recognition. ### Limitations - The model is specifically trained for the Thai language and may not perform well with other languages. - Performance might vary across different Thai dialects and accents. - As with any ASR system, background noise and speech clarity can impact recognition accuracy. ### Acknowledgments This model was developed using resources and datasets provided by the speech and language technology community. Special thanks to the teams behind Common Voice, Gowajee, SLSCU, and the Thai Elderly Speech Corpus for their valuable datasets. ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.15.0 ### Citation Cite using Bibtex: ``` @misc {thonburian_whisper_med, author = { Atirut Boribalburephan, Zaw Htet Aung, Knot Pipatsrisawat, Titipat Achakulvisut }, title = { Thonburian Whisper: A fine-tuned Whisper model for Thai automatic speech recognition }, year = 2022, url = { https://huggingface.co/biodatlab/distil-whisper-th-small }, doi = { 10.57967/hf/0226 }, publisher = { Hugging Face } } ``` ---
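A minimal transcription sketch (assuming the checkpoint loads through the standard `transformers` ASR pipeline; `speech.wav` is a placeholder path for your own audio file):

```python
from transformers import pipeline

# Build a standard ASR pipeline around the distilled checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="biodatlab/distill-whisper-th-small",
    chunk_length_s=30,  # chunk long-form audio into 30 s windows
)
print(asr("speech.wav")["text"])  # placeholder path; expects a real audio file
```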
poojakabber1997/sft_llama_13b
poojakabber1997
2024-02-28T15:08:06Z
0
0
peft
[ "peft", "region:us" ]
null
2024-02-28T15:05:42Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0
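For reference, the settings above correspond roughly to the `transformers` `BitsAndBytesConfig` sketched below; this is a reconstruction from the listed values, not a file shipped in the repo:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the values recorded above; with both load flags False, the base
# model was effectively loaded without bitsandbytes quantization
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```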
Weni/ZeroShot-3.3.11-Mistral-7b-Multilanguage-3.2.0
Weni
2024-02-28T15:06:58Z
0
0
peft
[ "peft", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-02-28T14:43:06Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: ZeroShot-3.3.9-Mistral-7b-Multilanguage-3.2.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZeroShot-3.3.9-Mistral-7b-Multilanguage-3.2.0 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5036 | 0.06 | 100 | 0.4879 | | 0.4561 | 0.12 | 200 | 0.4455 | | 0.4255 | 0.19 | 300 | 0.4324 | | 0.4111 | 0.25 | 400 | 0.4228 | | 0.4102 | 0.31 | 500 | 0.4165 | | 0.4059 | 0.37 | 600 | 0.4102 | | 0.3959 | 0.43 | 700 | 0.4059 | | 0.3904 | 0.5 | 800 | 0.4008 | | 0.3902 | 0.56 | 900 | 0.3966 | | 0.3895 | 0.62 | 1000 | 0.3930 | | 0.3829 | 0.68 | 1100 | 0.3904 | | 0.3885 | 0.74 | 1200 | 0.3879 | | 0.3735 | 0.81 | 1300 | 0.3860 | | 0.385 | 0.87 | 1400 | 0.3851 | | 0.3773 | 0.93 | 1500 | 0.3846 | | 0.3693 | 0.99 | 1600 | 0.3844 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
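A minimal sketch for attaching this adapter to its stated base model (it assumes the repo contains standard PEFT adapter weights; the prompt is illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Weni/ZeroShot-3.3.11-Mistral-7b-Multilanguage-3.2.0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter weights

# Illustrative zero-shot prompt in the Mistral chat format
messages = [{"role": "user", "content": "Classify the intent of: 'Where is my order?'"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```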
poojakabber1997/sft_vicuna_13b
poojakabber1997
2024-02-28T15:05:22Z
0
0
peft
[ "peft", "region:us" ]
null
2024-02-28T15:04:55Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0
Geerath/Geerath_mistral_7b_web_QA
Geerath
2024-02-28T15:03:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-28T15:02:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaziyarPanahi/StructLM-34B-GGUF
MaziyarPanahi
2024-02-28T14:59:13Z
90
2
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "llama", "text-generation", "en", "dataset:TIGER-Lab/SKGInstruct", "arxiv:2402.16671", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:TIGER-Lab/StructLM-34B", "base_model:quantized:TIGER-Lab/StructLM-34B" ]
text-generation
2024-02-28T13:00:56Z
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- llama
- text-generation
- en
- dataset:TIGER-Lab/SKGInstruct
- arxiv:2402.16671
- license:mit
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: StructLM-34B-GGUF
base_model: TIGER-Lab/StructLM-34B
inference: false
model_creator: TIGER-Lab
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/StructLM-34B-GGUF](https://huggingface.co/MaziyarPanahi/StructLM-34B-GGUF)
- Model creator: [TIGER-Lab](https://huggingface.co/TIGER-Lab)
- Original model: [TIGER-Lab/StructLM-34B](https://huggingface.co/TIGER-Lab/StructLM-34B)

## Description
[MaziyarPanahi/StructLM-34B-GGUF](https://huggingface.co/MaziyarPanahi/StructLM-34B-GGUF) contains GGUF format model files for [TIGER-Lab/StructLM-34B](https://huggingface.co/TIGER-Lab/StructLM-34B).

## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

### Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

</details>

## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: [MaziyarPanahi/StructLM-34B-GGUF](https://huggingface.co/MaziyarPanahi/StructLM-34B-GGUF) and below it, a specific filename to download, such as: StructLM-34B-GGUF.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download MaziyarPanahi/StructLM-34B-GGUF StructLM-34B-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download [MaziyarPanahi/StructLM-34B-GGUF](https://huggingface.co/MaziyarPanahi/StructLM-34B-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/StructLM-34B-GGUF StructLM-34B-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.

</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m StructLM-34B-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./StructLM-34B-GGUF.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./StructLM-34B-GGUF.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
Usman1921/suit-style-fine-tune-sdxl-lora-50-images-own-caption
Usman1921
2024-02-28T14:54:24Z
9
2
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-02-28T10:30:16Z
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'A photo of <s0><s1> fashion model wearing '
  output:
    url: "image_0.png"
- text: 'A photo of <s0><s1> fashion model wearing '
  output:
    url: "image_1.png"
- text: 'A photo of <s0><s1> fashion model wearing '
  output:
    url: "image_2.png"
- text: 'A photo of <s0><s1> fashion model wearing '
  output:
    url: "image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1> fashion model wearing
license: openrail++
---

# SDXL LoRA DreamBooth - Usman1921/suit-style-fine-tune-sdxl-lora-50-images-own-caption

<Gallery />

## Model description

### These are Usman1921/suit-style-fine-tune-sdxl-lora-50-images-own-caption LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

## Download model

### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke

- **LoRA**: download **[`suit-style-fine-tune-sdxl-lora-50-images-own-caption.safetensors` here 💾](/Usman1921/suit-style-fine-tune-sdxl-lora-50-images-own-caption/blob/main/suit-style-fine-tune-sdxl-lora-50-images-own-caption.safetensors)**.
    - Place it in your `models/Lora` folder.
    - On AUTOMATIC1111, load the LoRA by adding `<lora:suit-style-fine-tune-sdxl-lora-50-images-own-caption:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`suit-style-fine-tune-sdxl-lora-50-images-own-caption_emb.safetensors` here 💾](/Usman1921/suit-style-fine-tune-sdxl-lora-50-images-own-caption/blob/main/suit-style-fine-tune-sdxl-lora-50-images-own-caption_emb.safetensors)**.
    - Place it in your `embeddings` folder.
    - Use it by adding `suit-style-fine-tune-sdxl-lora-50-images-own-caption_emb` to your prompt.
For example, `A photo of suit-style-fine-tune-sdxl-lora-50-images-own-caption_emb fashion model wearing` (you need both the LoRA and the embeddings as they were trained together for this LoRA)

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Usman1921/suit-style-fine-tune-sdxl-lora-50-images-own-caption', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='Usman1921/suit-style-fine-tune-sdxl-lora-50-images-own-caption', filename='suit-style-fine-tune-sdxl-lora-50-images-own-caption_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)

image = pipeline('A photo of <s0><s1> fashion model wearing ').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Trigger words

To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:

to trigger concept `TOK` → use `<s0><s1>` in your prompt

## Details

All [Files & versions](/Usman1921/suit-style-fine-tune-sdxl-lora-50-images-own-caption/tree/main).

The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).

LoRA for the text encoder was enabled: False.

Pivotal tuning was enabled: True.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
jlemmm/sdxl-multiple-advanced-folder2
jlemmm
2024-02-28T14:53:01Z
1
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-02-28T10:57:14Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of wbf shirt, sks pant and oue shoes tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was trained.
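A minimal usage sketch (assuming the repo ships LoRA weights at its root, as AutoTrain DreamBooth runs typically do; if it instead holds a full fine-tuned pipeline, loading this repo id directly with `DiffusionPipeline.from_pretrained` would be the route):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach the AutoTrain-produced weights
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("jlemmm/sdxl-multiple-advanced-folder2")  # assumes LoRA weights in the repo

# The instance prompt the model was trained on
image = pipe("a photo of wbf shirt, sks pant and oue shoes").images[0]
image.save("output.png")
```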