Dataset schema (per-column types and observed ranges):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-28 12:28:24 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (categorical, 500 classes) | — | — |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (categorical, 54 classes) | — | — |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-28 12:27:53 |
| card | string (length) | 11 | 1.01M |
Ricky080811/Test2
Ricky080811
2024-03-11T21:36:27Z
0
0
null
[ "safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T21:36:22Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
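Editor's note: the card's snippet above moves inputs to `'cuda'` unconditionally even though the model is loaded with `device_map="auto"`. A device-agnostic variant (a sketch, not part of the original card; `max_new_tokens` is an added assumption) routes inputs to wherever the model actually landed:

```python
# Sketch: same chat flow as the card, but inputs follow model.device,
# so the example also runs on CPU-only machines.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"  # placeholder kept from the card
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
).eval()

messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
# Generate on the model's own device instead of hard-coding 'cuda'
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```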
yulymur/Micha
yulymur
2024-03-11T21:34:17Z
0
0
flair
[ "flair", "text-generation", "ru", "dataset:HuggingFaceTB/cosmopedia", "license:bsl-1.0", "region:us" ]
text-generation
2024-03-11T21:09:53Z
--- license: bsl-1.0 datasets: - HuggingFaceTB/cosmopedia language: - ru metrics: - code_eval library_name: flair pipeline_tag: text-generation ---
graizelle/grle
graizelle
2024-03-11T21:29:13Z
18
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:openrail", "region:us" ]
text-to-image
2024-01-31T05:28:54Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '1girl, grle, looking at viewer, long blonde hair, green eyes, red dress, jewelry, masterpiece best quality, realistic, dramatic lighting' parameters: negative_prompt: >- lowres, bad anatomy, bad hands, text, error, missing fingers, extra digits, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry width=1024, height=1024, guidance_scale=5, num_inference_steps=30 example_title: GrLE output: url: images/grle_20240131051416_e000001_01.png - text: '1girl, grle, looking at viewer, long blonde hair, green eyes, hoodie, shorts, thigh highs, masterpiece best quality, realistic, dramatic lighting' output: url: images/grle_20240131051536_e000009_01.png - text: '1girl, grle, emo, looking at viewer, long blonde hair, baseball cap, shorts, masterpiece best quality, realistic, dramatic lighting' base_model: runwayml/stable-diffusion-v1-5 instance_prompt: null license: openrail --- # GrLE <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/graizelle/grle/tree/main) them in the Files & versions tab.
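Editor's note: the card only links the raw Safetensors download. A minimal diffusers sketch (not from the card — the base model id comes from the metadata above, and an explicit `weight_name` may be needed if the repo holds several LoRA files):

```python
# Sketch: load the SD 1.5 base named in the card's metadata and attach this LoRA.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("graizelle/grle")  # may need weight_name="..." for a specific file

image = pipe(
    "1girl, grle, looking at viewer, long blonde hair, green eyes, red dress",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality, low quality",
    num_inference_steps=30,
    guidance_scale=5.0,
).images[0]
image.save("grle.png")
```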
whizzzzkid/Bosbonasusmini12
whizzzzkid
2024-03-11T21:28:07Z
90
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T21:23:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Weni/ZeroShot-3.4.3-Mistral-7b-DPO-1.0.0-merged
Weni
2024-03-11T21:25:37Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T20:53:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
smig/leagaleasy-mistral-7b-instruct-v0.2-v1
smig
2024-03-11T21:24:38Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-03-06T19:13:41Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 datasets: - generator model-index: - name: leagaleasy-mistral-7b-instruct-v0.2-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # leagaleasy-mistral-7b-instruct-v0.2-v1 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
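Editor's note: the card gives no usage snippet. A hedged sketch for attaching this PEFT adapter (repo and base-model ids are taken from the metadata above; everything else is assumed):

```python
# Sketch: load the base model from the card's metadata, then attach the adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "smig/leagaleasy-mistral-7b-instruct-v0.2-v1")
model.eval()
```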
SimoneJLaudani/trainer4b
SimoneJLaudani
2024-03-11T21:17:02Z
93
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T21:16:35Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: trainer4b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trainer4b This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0036 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | 0.0001 | 1.2 | 30 | 0.0022 | 1.0 | 1.0 | | 0.0 | 2.4 | 60 | 0.0023 | 1.0 | 1.0 | | 0.0 | 3.6 | 90 | 0.0016 | 1.0 | 1.0 | | 0.0 | 4.8 | 120 | 0.0036 | 1.0 | 1.0 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
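Editor's note: like most auto-generated Trainer cards here, this one omits a usage snippet. A plausible one (a sketch; the input sentence is invented) via the pipeline API:

```python
# Sketch: run the fine-tuned classifier through the text-classification pipeline.
from transformers import pipeline

clf = pipeline("text-classification", model="SimoneJLaudani/trainer4b")
print(clf("I can't believe how well this turned out!"))  # example input (invented)
```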
OwOOwO/eacc_adhoc_mtest
OwOOwO
2024-03-11T21:13:50Z
91
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-10T10:27:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OwOOwO/eacc_6_5_1
OwOOwO
2024-03-11T21:11:46Z
90
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-10T20:51:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
enrique2701/cleanrl-ppo-LunarLander-v2-2M
enrique2701
2024-03-11T21:07:37Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-03-11T17:28:07Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 143.99 +/- 61.82 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
denysdios/whisper-med-tr-tuned
denysdios
2024-03-11T21:06:48Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "tr", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-01T10:11:27Z
--- language: - tr license: apache-2.0 base_model: openai/whisper-medium tags: - hf-asr-leaderboard - generated_from_trainer metrics: - wer model-index: - name: Whisper Medium Tr - denysdios results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium Tr - denysdios This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 13.0 & Fleurs dataset. It achieves the following results on the evaluation set: - Loss: 0.1618 - Wer: 14.3825 ## Model description The model took about nine hours to train on a single A100 GPU. ## Intended uses & limitations No restrictions beyond those of the base Whisper models. The primary objective was to increase the proportion of labeled Turkish data in Whisper, which was 4333/690k hours (about 0.63%). The fine-tuning dataset contains just 49.945 hours of audio, about 1.1% of the Turkish data Whisper was originally trained on. ## Training and evaluation data Processing... ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1803 | 0.36 | 1000 | 0.2089 | 18.6326 | | 0.1428 | 0.71 | 2000 | 0.1821 | 16.3912 | | 0.0535 | 1.07 | 3000 | 0.1693 | 14.9132 | | 0.0491 | 1.43 | 4000 | 0.1618 | 14.3825 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
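Editor's note: a usage sketch to complement the card (assumed, not from the card; the audio path and chunking setting are placeholders):

```python
# Sketch: transcribe Turkish speech with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="denysdios/whisper-med-tr-tuned",
    chunk_length_s=30,  # assumed setting, useful for audio longer than 30 s
)
print(asr("sample_turkish.wav")["text"])  # placeholder audio file
```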
sarak7/H4_312_769_v1
sarak7
2024-03-11T21:02:23Z
181
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T21:00:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sweetfelinity/Reinforce-Pixelcopter-PLE-v0
sweetfelinity
2024-03-11T21:01:22Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-03-11T21:01:19Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 22.90 +/- 13.89 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ISTA-DASLab/Llama-2-70b-AQLM-2Bit-2x8-hf
ISTA-DASLab
2024-03-11T20:55:32Z
14
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2401.06118", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "aqlm", "region:us" ]
text-generation
2024-02-07T14:48:41Z
--- {} --- Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of `meta-llama/Llama-2-70b-hf`. For this quantization, we used 2 codebooks of 8 bits. Selected evaluation results for this and other models: | Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link | |------------|-------------|----------------|----------------|--------------------------------------------------------------------------| | Llama-2-7b | 1x16 | 5.92 | 2.4 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf) | | Llama-2-7b | 2x8 | 6.69 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-2x8-hf) | | Llama-2-7b | 8x8 | 6.61 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf) | | Llama-2-13b| 1x16 | 5.22 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-13b-AQLM-2Bit-1x16-hf)| | Llama-2-70b| 1x16 | 3.83 | 18.8 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-1x16-hf)| | Llama-2-70b (THIS) | 2x8 | 4.21 | 18.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-2x8-hf) | | Mixtral-8x7b| 1x16 | 3.35 | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf)| | Mixtral-8x7b-Instruct| 1x16 | - | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7B-Instruct-v0_1-AQLM-2Bit-1x16-hf)| To learn more about the inference, as well as the information on how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
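Editor's note: the AQLM cards in this batch all defer inference details to the GitHub repo. As a hedged sketch (assuming the `aqlm` kernel package is installed, e.g. `pip install aqlm[gpu]`, and a transformers version with AQLM support), the checkpoints load through the standard API:

```python
# Sketch: AQLM checkpoints load like any other transformers causal LM
# once the aqlm kernels are available; 2-bit 70B still needs substantial VRAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ISTA-DASLab/Llama-2-70b-AQLM-2Bit-2x8-hf"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```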
ISTA-DASLab/Llama-2-13b-AQLM-2Bit-1x16-hf
ISTA-DASLab
2024-03-11T20:54:41Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2401.06118", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "aqlm", "region:us" ]
text-generation
2024-01-31T11:12:34Z
Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of `meta-llama/Llama-2-13b-hf`. For this quantization, we used 1 codebook of 16 bits. Selected evaluation results for this and other models: | Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link | |------------|-------------|----------------|----------------|--------------------------------------------------------------------------| | Llama-2-7b | 1x16 | 5.92 | 2.4 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf) | | Llama-2-7b | 2x8 | 6.69 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-2x8-hf) | | Llama-2-7b | 8x8 | 6.61 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf) | | Llama-2-13b (THIS) | 1x16 | 5.22 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-13b-AQLM-2Bit-1x16-hf)| | Llama-2-70b| 1x16 | 3.83 | 18.8 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-1x16-hf)| | Llama-2-70b| 2x8 | 4.21 | 18.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-2x8-hf) | | Mixtral-8x7b| 1x16 | 3.35 | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf)| | Mixtral-8x7b-Instruct| 1x16 | - | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7B-Instruct-v0_1-AQLM-2Bit-1x16-hf)| To learn more about the inference, as well as the information on how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf
ISTA-DASLab
2024-03-11T20:54:17Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2401.06118", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "aqlm", "region:us" ]
text-generation
2024-01-30T17:23:36Z
Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of `meta-llama/Llama-2-7b-hf`. For this quantization, we used 8 codebooks of 8 bits. Selected evaluation results for this and other models: | Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link | |------------|-------------|----------------|----------------|--------------------------------------------------------------------------| | Llama-2-7b | 1x16 | 5.92 | 2.4 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf) | | Llama-2-7b | 2x8 | 6.69 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-2x8-hf) | | Llama-2-7b (THIS) | 8x8 | 6.61 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf) | | Llama-2-13b| 1x16 | 5.22 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-13b-AQLM-2Bit-1x16-hf)| | Llama-2-70b| 1x16 | 3.83 | 18.8 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-1x16-hf)| | Llama-2-70b| 2x8 | 4.21 | 18.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-2x8-hf) | | Mixtral-8x7b| 1x16 | 3.35 | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf)| | Mixtral-8x7b-Instruct| 1x16 | - | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7B-Instruct-v0_1-AQLM-2Bit-1x16-hf)| To learn more about the inference, as well as the information on how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
404NotF0und/lunar-llm-phi-2-3epochs
404NotF0und
2024-03-11T20:54:06Z
33
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "autotrain", "custom_code", "dataset:404NotF0und/MtG-json-to-ForgeScript", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T13:19:14Z
--- tags: - autotrain - text-generation widget: - text: >- Create the Forge script for this magic card { "name": "Wrench", "mana_cost": "{W}", "type_line": "Artifact— Clue Equipment", "oracle_text": "Equipped creature gets +1/+1 and has vigilance and "{3}, {T}: Tap target creature." {2}, Sacrifice CARD_NAME: Draw a card. Equip {2}'"} license: mit metrics: - accuracy - perplexity datasets: - 404NotF0und/MtG-json-to-ForgeScript --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage - Do some installations first ``` pip install transformers datasets matplotlib pandas git-lfs jiwer tqdm numpy git clone https://huggingface.co/datasets/404NotF0und/MtG-json-to-ForgeScribe ``` The following code is an example of the usage done in a Kaggle notebook ```python import torch import random import csv import pandas as pd from transformers import AutoTokenizer, AutoModelForCausalLM from collections.abc import Sequence # Function to read the CSV files and extract the relevant columns def read_dataset(file_path): print(f"Reading dataset from {file_path}") data = [] with open(file_path, encoding="utf-8") as csv_file: csv_reader = csv.DictReader(csv_file) # Use DictReader to handle columns by name for row in csv_reader: json_input = f"{row['instruction']} {row['input']}" # Assuming 'input' column contains the JSON input target_dsl = row["output"] # Assuming 'output' column contains the target DSL data.append((json_input, target_dsl)) return data # Function to load the model and tokenizer from Hugging Face def load_model(model_name, read_token, device): tokenizer = AutoTokenizer.from_pretrained(model_name, token=read_token) model = AutoModelForCausalLM.from_pretrained(model_name, token=read_token).to(device) # move the model onto the requested device return tokenizer, model # Function to run inference (text generation) def run_inference(model, tokenizer, prompt, max_length=300): # Encode the prompt text and keep it on the model's device input_ids = tokenizer.encode(prompt, return_tensors='pt').to(model.device) # Generate text using the model output_sequences = model.generate( input_ids=input_ids, max_length=max_length, temperature=0.5, top_k=50, top_p=0.95, pad_token_id=tokenizer.eos_token_id, do_sample=True ) # Decode the generated text generated_text = tokenizer.decode(output_sequences[0], skip_special_tokens=True) print(generated_text.split('###')[1]) return generated_text.split('###')[1] ``` ```python read_token = 'hf_YOUR_TOKEN' device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_name = '404NotF0und/lunar-llm-phi-2-3epochs' # Load the datasets validation_path = f"MtG-json-to-ForgeScribe/compiled_cards_data_validation.csv" test_path = f"MtG-json-to-ForgeScribe/compiled_cards_data_test.csv" train_path = f"MtG-json-to-ForgeScribe/compiled_cards_data_train.csv" # Read the datasets validation_data = read_dataset(validation_path) test_data = read_dataset(test_path) train_data = read_dataset(train_path) ``` ```python def get_random_prompts(dataset, num_samples=3): if not isinstance(dataset, Sequence): dataset = list(dataset) if len(dataset) < num_samples: raise ValueError(f"Dataset does not have enough elements to sample {num_samples} items.") random_elements = random.sample(dataset, num_samples) # Create a list of dictionaries with 'json_input' and 'max_length' for each selected element prompts = [ { 'json_input': element[0], 'max_length': len(f"{element[0]}\n### Response: {element[1]}") # Calculate the length of the response } for element in random_elements ] return prompts # Now you
can populate the prompts variable with random elements from each dataset (here a single hand-written prompt is used instead) try: prompts = [ { 'json_input': "Create the Forge script for this magic card { \"name\": \"Wrench\", \"mana_cost\": \"{W}\", \"type_line\": \"Artifact\u2014 Clue Equipment\", \"oracle_text\": \"Equipped creature gets +1/+1 and has vigilance and \"{3}, {T}: Tap target creature.\"\n{2}, Sacrifice CARD_NAME: Draw a card.\nEquip {2}'\"}", 'max_length': 100 } ] except ValueError as e: print(e) ``` ```python # Load the model and tokenizer tokenizer, model = load_model(model_name, read_token, device) for prompt in prompts: print(f"### Question: {prompt['json_input']} \n") print("\n" + "-"*80 + "\n") # Run inference (text generation) generated_text = run_inference(model, tokenizer, prompt['json_input']) # Print the generated text # print(generated_text) print("\n" + "="*80 + "\n") # Separator for readability ``` Lastly, this is an example of the output you should get ``` ### Question: Create the Forge script for this magic card { "name": "Wrench", "mana_cost": "{W}", "type_line": "Artifact— Clue Equipment", "oracle_text": "Equipped creature gets +1/+1 and has vigilance and "{3}, {T}: Tap target creature." {2}, Sacrifice CARD_NAME: Draw a card. Equip {2}'"} -------------------------------------------------------------------------------- Response: Name:Wrench\nManaCost:W\nTypes:Artifact Clue Equipment\nK:Equip:2\nS:Mode$ Continuous | Affected$ Creature.EquippedBy | AddPower$ 1 | AddToughness$ 1 | AddKeyword$ Vigilance | AddAbility$ TrigTap | Description$ Equipped creature gets +1/+1 and has vigilance and "{3}, {T}: Tap target creature."\nSVar:TrigTap:AB$ Tap | Cost$ 3 T | ValidTgts$ Creature | TgtPrompt$ Select target creature | SpellDescription$ Tap target creature.\nA:AB$ Draw | Cost$ 2 Sac<1/CARDNAME> | NumCards$ 1 | SpellDescription$ Draw a card.\nOracle:Equipped creature gets +1/+1 and has vigilance and "{3}, {T}: Tap target creature."\n{2}, Sacrifice Wrench: Draw ================================================================================ ```
ISTA-DASLab/Llama-2-7b-AQLM-2Bit-2x8-hf
ISTA-DASLab
2024-03-11T20:53:48Z
31
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2401.06118", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "aqlm", "region:us" ]
text-generation
2024-01-30T10:37:07Z
Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of `meta-llama/Llama-2-7b-hf`. For this quantization, we used 2 codebooks of 8 bits. Selected evaluation results for this and other models: | Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link | |------------|-------------|----------------|----------------|--------------------------------------------------------------------------| | Llama-2-7b | 1x16 | 5.92 | 2.4 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf) | | Llama-2-7b (THIS) | 2x8 | 6.69 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-2x8-hf) | | Llama-2-7b | 8x8 | 6.61 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf) | | Llama-2-13b| 1x16 | 5.22 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-13b-AQLM-2Bit-1x16-hf)| | Llama-2-70b| 1x16 | 3.83 | 18.8 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-1x16-hf)| | Llama-2-70b| 2x8 | 4.21 | 18.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-2x8-hf) | | Mixtral-8x7b| 1x16 | 3.35 | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf)| | Mixtral-8x7b-Instruct| 1x16 | - | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7B-Instruct-v0_1-AQLM-2Bit-1x16-hf)| **UPD** (20.02.2024). We applied global finetuning on top of quantized model and improved results compared to first revision. To learn more about the inference, as well as the information on how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf
ISTA-DASLab
2024-03-11T20:52:35Z
44
23
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:2401.06118", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "aqlm", "region:us" ]
text-generation
2024-02-08T12:40:49Z
Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of `mistralai/Mixtral-8x7B-v0.1`. For this quantization, we used 1 codebook of 16 bits. Selected evaluation results for this and other models: | Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link | |------------|-------------|----------------|----------------|--------------------------------------------------------------------------| | Llama-2-7b | 1x16 | 5.92 | 2.4 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf) | | Llama-2-7b | 2x8 | 6.69 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-2x8-hf) | | Llama-2-7b | 8x8 | 6.61 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf) | | Llama-2-13b| 1x16 | 5.22 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-13b-AQLM-2Bit-1x16-hf)| | Llama-2-70b| 1x16 | 3.83 | 18.8 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-1x16-hf)| | Llama-2-70b| 2x8 | 4.21 | 18.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-2x8-hf) | | Mixtral-8x7b (THIS) | 1x16 | 3.35 | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf)| | Mixtral-8x7b-Instruct| 1x16 | - | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7B-Instruct-v0_1-AQLM-2Bit-1x16-hf)| To learn more about the inference, as well as the information on how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
Weni/ZeroShot-3.4.3-Mistral-7b-DPO-1.0.0
Weni
2024-03-11T20:50:47Z
0
0
trl
[ "trl", "safetensors", "DPO", "ZeroShot", "en", "es", "pt", "base_model:Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged", "base_model:finetune:Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged", "license:mit", "region:us" ]
null
2024-03-11T20:06:32Z
--- license: mit library_name: "trl" tags: - DPO - ZeroShot base_model: Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged model-index: - name: Weni/ZeroShot-3.4.3-Mistral-7b-DPO-1.0.0 results: [] language: ['en', 'es', 'pt'] --- # Weni/ZeroShot-3.4.3-Mistral-7b-DPO-1.0.0 This model is a fine-tuned version of [Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged] on the dataset Weni/zeroshot-dpo-1.0.0 with the DPO trainer. It is part of the ZeroShot project for [Weni](https://weni.ai/). It achieves the following results on the evaluation set: {'eval_loss': 0.20482958853244781, 'eval_runtime': 23.1466, 'eval_samples_per_second': 2.635, 'eval_steps_per_second': 0.346, 'eval_rewards/chosen': 0.035770073533058167, 'eval_rewards/rejected': -7.72912073135376, 'eval_rewards/accuracies': 0.9375, 'eval_rewards/margins': 7.764890670776367, 'eval_logps/rejected': -91.3374252319336, 'eval_logps/chosen': -15.580533981323242, 'eval_logits/rejected': -0.613441526889801, 'eval_logits/chosen': -0.7170205116271973, 'epoch': 5.65} ## Intended uses & limitations This model has not been trained to avoid specific instructions. ## Training procedure Finetuning was done on the model Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged with the following prompt: ``` Portuguese: [INST] Você é muito especialista em classificar a frase do usuário em um chatbot sobre: {context} Pare, pense bem e responda com APENAS UM ÚNICO \`id\` da classe que melhor represente a intenção para a frase do usuário de acordo com a análise de seu contexto, responda APENAS com o \`id\` da classe só se você tiver muita certeza e não explique o motivo. Na ausência, falta de informações ou caso a frase do usuário não se enquadre em nenhuma classe, classifique como "-1". # Essas são as Classes com seus Id e Contexto: {all_classes} # Frase do usuário: {input} # Id da Classe: [/INST] Spanish: [INST] Eres muy experto en clasificar la frase del usuario en un chatbot sobre: {context} Deténgase, piense bien y responda con SOLO UN ÚNICO \`id\` de la clase que mejor represente la intención para la frase del usuario de acuerdo con el análisis de su contexto, responda SOLO con el \`id\` de la clase si está muy seguro y no explique el motivo. En ausencia, falta de información o en caso de que la frase del usuario no se ajuste a ninguna clase, clasifique como "-1". # Estas son las Clases con sus Id y Contexto: {all_classes} # Frase del usuario: {input} # Id de la Clase: [/INST] English: [INST] You are very expert in classifying the user sentence in a chatbot about: {context} Stop, think carefully, and respond with ONLY ONE SINGLE \`id\` of the class that best represents the intention for the user's sentence according to the analysis of its context, respond ONLY with the \`id\` of the class if you are very sure and do not explain the reason. In the absence, lack of information, or if the user's sentence does not fit into any class, classify as "-1".
# These are the Classes and its Context: {all_classes} # User's sentence: {input} # Class Id: [/INST] Chosen_response: {chosen_response} Rejected_response: {rejected_response} ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - per_device_train_batch_size: 8 - per_device_eval_batch_size: 8 - gradient_accumulation_steps: 4 - num_gpus: 1 - total_train_batch_size: 32 - optimizer: AdamW - lr_scheduler_type: cosine - num_steps: 96 - quantization_type: bitsandbytes - LoRA: bits: 4, use_exllama: True, device_map: auto, use_cache: False, lora_r: 8, lora_alpha: 16, lora_dropout: 0.1, bias: none, target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj'], task_type: CAUSAL_LM ### Training results ### Framework versions - transformers==4.38.2 - datasets==2.17.1 - peft==0.8.2 - safetensors==0.4.2 - evaluate==0.4.1 - bitsandbytes==0.42 - huggingface_hub==0.20.3 - seqeval==1.2.2 - optimum==1.17.1 - auto-gptq==0.7.0 - gpustat==1.1.1 - deepspeed==0.13.2 - wandb==0.16.3 - trl==0.7.11 - accelerate==0.27.2 - coloredlogs==15.0.1 - traitlets==5.14.1 - autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl ### Hardware - Cloud provider: runpod.io
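Editor's note: the card does not show its training script. For orientation, a heavily hedged sketch of what the listed setup implies with trl 0.7.x's `DPOTrainer` (the dataset split name, `beta`, and output path are assumptions; hyperparameters marked "from the card" come from the list above):

```python
# Sketch only: DPO fine-tuning in the spirit of the card's settings (trl==0.7.11).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
dataset = load_dataset("Weni/zeroshot-dpo-1.0.0", split="train")  # assumed split

trainer = DPOTrainer(
    model,
    ref_model=None,  # trl clones a frozen reference model when None
    args=TrainingArguments(
        output_dir="zeroshot-dpo",      # assumed
        per_device_train_batch_size=8,  # from the card
        gradient_accumulation_steps=4,  # from the card
        learning_rate=2e-4,             # from the card
        lr_scheduler_type="cosine",     # from the card
        max_steps=96,                   # from the card (num_steps)
    ),
    beta=0.1,  # assumed; not listed in the card
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```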
SimoneJLaudani/trainer3b
SimoneJLaudani
2024-03-11T20:48:55Z
93
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T20:48:26Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: trainer3b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trainer3b This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0025 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | 0.0013 | 1.2 | 30 | 0.0049 | 1.0 | 1.0 | | 0.0003 | 2.4 | 60 | 0.0025 | 1.0 | 1.0 | | 0.0002 | 3.6 | 90 | 0.0020 | 1.0 | 1.0 | | 0.0001 | 4.8 | 120 | 0.0038 | 1.0 | 1.0 | | 0.0001 | 6.0 | 150 | 0.0031 | 1.0 | 1.0 | | 0.0001 | 7.2 | 180 | 0.0027 | 1.0 | 1.0 | | 0.0001 | 8.4 | 210 | 0.0025 | 1.0 | 1.0 | | 0.0001 | 9.6 | 240 | 0.0025 | 1.0 | 1.0 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
samiksharasaikar/travel-xzg
samiksharasaikar
2024-03-11T20:48:13Z
3
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-11T20:44:13Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### Travel-XZG Dreambooth model trained by samiksharasaikar following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: I21-15 Sample pictures of this concept: ![0](https://huggingface.co/samiksharasaikar/travel-xzg/resolve/main/sample_images/809029_a_traveller_on_the_top_of_the_mountains_with_India_xl-1024-v1-0.png) ![1](https://huggingface.co/samiksharasaikar/travel-xzg/resolve/main/sample_images/809031_a_traveller_on_the_top_of_the_mountains_with_India_xl-1024-v1-0.png) ![2](https://huggingface.co/samiksharasaikar/travel-xzg/resolve/main/sample_images/809030_a_traveller_on_the_top_of_the_mountains_with_India_xl-1024-v1-0.png) ![3](https://huggingface.co/samiksharasaikar/travel-xzg/resolve/main/sample_images/809028_a_traveller_on_the_top_of_the_mountains_with_India_xl-1024-v1-0.png)
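A minimal inference sketch with diffusers; the prompt is only an example modeled on the samples above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint from this repo (pipeline class per the repo tags).
pipe = StableDiffusionPipeline.from_pretrained(
    "samiksharasaikar/travel-xzg", torch_dtype=torch.float16
).to("cuda")

image = pipe("a traveller on the top of the mountains with India, travel-xzg").images[0]
image.save("travel.png")
```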
tolgadev/TrendyolMixLLM_v1.1-ties
tolgadev
2024-03-11T20:47:17Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0", "Trendyol/Trendyol-LLM-7b-chat-v0.1", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T20:42:19Z
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0
- Trendyol/Trendyol-LLM-7b-chat-v0.1
---

# TrendyolMixLLM_v1.1-ties

TrendyolMixLLM_v1.1-ties is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0)
* [Trendyol/Trendyol-LLM-7b-chat-v0.1](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1)

## 🧩 Configuration

```yaml
models:
  - model: Trendyol/Trendyol-LLM-7b-chat-v1.0
    # no parameters necessary for base model
  - model: Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0
    parameters:
      density: 0.5
      weight: 0.5
  - model: Trendyol/Trendyol-LLM-7b-chat-v0.1
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: Trendyol/Trendyol-LLM-7b-chat-v1.0
parameters:
  normalize: true
dtype: float16
```
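## 💻 Usage

The merged checkpoint loads like any other transformers model; a minimal sketch (the prompt is illustrative):

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="tolgadev/TrendyolMixLLM_v1.1-ties",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(pipe("Merhaba, nasılsın?", max_new_tokens=128)[0]["generated_text"])
```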
tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF
tolgadev
2024-03-11T20:45:41Z
56
2
transformers
[ "transformers", "gguf", "trendyol", "llama-2", "turkish", "text-generation", "tr", "en", "base_model:Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0", "base_model:quantized:Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0", "license:apache-2.0", "region:us", "conversational" ]
text-generation
2024-03-11T17:55:28Z
---
model_name: Trendyol-LLM-7b-chat-dpo-v1.0-gguf
model_creator: Trendyol
base_model: Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
model_type: llama
library_name: transformers
inference: false
tags:
- trendyol
- llama-2
- turkish
quantized_by: tolgadev
---

## Trendyol-LLM-7b-chat-dpo-v1.0 models

----

## Description

This repo contains all types of GGUF formatted model files for [Trendyol-LLM-7b-chat-dpo-v1.0](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0).

<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0/resolve/main/trendyol-llm-mistral.jpg" alt="drawing" width="400"/>

## Quantized LLM models and methods

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q2_K.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q2_K.gguf) | Q2_K | 2 | 2.59 GB| 4.88 GB | smallest, significant quality loss - not recommended for most purposes |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q3_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 3.01 GB| 5.56 GB | very small, high quality loss |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q3_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.36 GB| 5.91 GB | very small, high quality loss |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q3_K_L.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.66 GB| 6.20 GB | small, substantial quality loss |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q4_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q4_0.gguf) | Q4_0 | 4 | 3.9 GB| 6.45 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q4_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 3.93 GB| 6.48 GB | small, greater quality loss |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q4_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.15 GB| 6.69 GB | medium, balanced quality - recommended |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q5_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q5_0.gguf) | Q5_0 | 5 | 4.73 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q5_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 4.75 GB| 7.27 GB | large, low quality loss - recommended |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q5_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 4.86 GB| 7.40 GB | large, very low quality loss - recommended |
| [Trendyol-LLM-7b-chat-dpo-v1.0.Q6_K.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-dpo-v1.0.Q6_K.gguf) | Q6_K | 6 | 5.61 GB| 8.15 GB | very large, extremely low quality loss |

The names of the quantization methods follow the naming convention: "q" + the number of bits + the variant used (detailed below). Here is a list of all the models and their corresponding use cases, based on model cards made by [TheBloke](https://huggingface.co/TheBloke/):

* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_s`: Uses Q3_K for all tensors
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0, but not as high as q5_0. However, it has quicker inference than the q5 models.
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q4_k_s`: Uses Q4_K for all tensors
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q5_k_s`: Uses Q5_K for all tensors
* `q6_k`: Uses Q8_K for all tensors

**TheBloke recommends using Q5_K_M** as it preserves most of the model's performance. Alternatively, you can use Q4_K_M if you want to save some memory. In general, K_M versions are better than K_S versions.

## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

- LM Studio
- LoLLMS Web UI
- Faraday.dev

## Special thanks to [TheBloke on Huggingface](https://huggingface.co/TheBloke) and [Maxime Labonne on Github](https://github.com/mlabonne/llm-course)

-----

# **Trendyol LLM v1.0 - DPO**

Trendyol LLM v1.0 - DPO is a generative model that is based on the Mistral 7B model. DPO training was applied. This is the repository for the chat model.

## Model Details

**Model Developers** Trendyol

**Variations** [base](https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v1.0), [chat](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0), and dpo variations.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Trendyol LLM is an auto-regressive language model (based on Mistral 7b) that uses an optimized transformer architecture. The Huggingface TRL lib was used for training.
The DPO version was fine-tuned on 11K preference triples (prompt, chosen, rejected) with the following LoRA training settings:

- **lr**=5e-6
- **lora_rank**=64
- **lora_alpha**=128
- **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj
- **lora_dropout**=0.05
- **bf16**=True
- **beta**=0.01
- **max_length**= 1024
- **max_prompt_length**= 512
- **lr_scheduler_type**= cosine
- **torch_dtype**= bfloat16

<img src="https://camo.githubusercontent.com/3e61ca080778f62988b459c7321726fa35bb3776ceb07ecaabf71ebca44f95a7/68747470733a2f2f68756767696e67666163652e636f2f64617461736574732f74726c2d696e7465726e616c2d74657374696e672f6578616d706c652d696d616765732f7265736f6c76652f6d61696e2f696d616765732f74726c5f62616e6e65725f6461726b2e706e67" alt="drawing" width="600"/>

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png" alt="drawing" width="600"/>

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map='auto',
                                             load_in_8bit=True)
sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9)

pipe = pipeline("text-generation",
                model=model,
                tokenizer=tokenizer,
                device_map="auto",
                max_new_tokens=1024,
                return_full_text=True,
                repetition_penalty=1.1
                )

DEFAULT_SYSTEM_PROMPT = "Sen yardımcı bir asistansın ve sana verilen talimatlar doğrultusunda en iyi cevabı üretmeye çalışacaksın.\n"

TEMPLATE = (
    "[INST] {system_prompt}\n\n"
    "{instruction} [/INST]"
)

def generate_prompt(instruction, system_prompt=DEFAULT_SYSTEM_PROMPT):
    return TEMPLATE.format_map({'instruction': instruction, 'system_prompt': system_prompt})

def generate_output(user_query, sys_prompt=DEFAULT_SYSTEM_PROMPT):
    prompt = generate_prompt(user_query, sys_prompt)
    outputs = pipe(prompt, **sampling_params)
    return outputs[0]["generated_text"].split("[/INST]")[-1]

user_query = "Türkiye'de kaç il var?"
response = generate_output(user_query)
print(response)
```

with chat template:

```python
pipe = pipeline("conversational",
                model=model,
                tokenizer=tokenizer,
                device_map="auto",
                max_new_tokens=1024,
                repetition_penalty=1.1
                )

messages = [
    {"role": "user", "content": "Türkiye'de kaç il var?"}
]

outputs = pipe(messages, **sampling_params)
print(outputs)
```

## Limitations, Risks, Bias, and Ethical Considerations

### Limitations and Known Biases

- **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it is important to note that it has not undergone extensive real-world application testing. Its effectiveness and reliability across diverse scenarios remain largely unverified.
- **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.
- **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers.

### Risks and Ethical Considerations

- **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment.
- **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies.
- **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks.

### Recommendations for Safe and Ethical Usage

- **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly.
- **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model's responses can be unpredictable and may occasionally be biased, inaccurate, or offensive.
- **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences.
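Tying the two halves of this card together, here is a minimal sketch of fetching one of the GGUF files from the table above and running it locally with `llama-cpp-python`; the context size and sampling settings are illustrative.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    repo_id="tolgadev/Trendyol-LLM-7b-chat-dpo-v1.0-GGUF",
    filename="trendyol-llm-7b-chat-dpo-v1.0.Q5_K_M.gguf",  # the recommended quant above
)

llm = Llama(model_path=model_path, n_ctx=4096)  # illustrative context size

# Prompt format taken from the usage section above.
prompt = "[INST] Sen yardımcı bir asistansın.\n\nTürkiye'de kaç il var? [/INST]"
out = llm(prompt, max_tokens=256, temperature=0.3)
print(out["choices"][0]["text"])
```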
mariogemoll/bppc-vit
mariogemoll
2024-03-11T20:44:14Z
180
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-11T12:21:52Z
---
library_name: transformers
tags: []
---

# Body progress pic classifier

A vision transformer (ViT) model based on [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) to detect whether a gym progress pic was taken from the front, from the back, from the left side or from the right side.

There is a [demo](https://huggingface.co/spaces/mariogemoll/bppc), but really this is just my first dummy project using the transformers library.
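A minimal inference sketch; the exact label strings come from the repo's `id2label` config, so the ones shown in the comment are only indicative.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="mariogemoll/bppc-vit")
preds = classifier("progress_pic.jpg")  # path or URL to a progress pic
print(preds)  # e.g. scores for front / back / left side / right side
```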
cnmoro/t5-small-named-entity-recognition
cnmoro
2024-03-11T20:41:24Z
117
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-11T04:14:50Z
---
language:
- en
widget:
- text: "Emma Stone looked genuinely shocked when her name was announced as the best actress winner earlier. “I think I blacked out! I was very shocked,” she says backstage. “I still feel like I’m spinning a little bit. It’s a huge honour and I’m very surprised.” Having experienced a bit of a wardrobe malfunction, she reassures us that all is now well. “They sewed me back in! I genuinely think I busted it during I’m Just Ken! I was so amazed by Ryan Gosling and that number just blew my mind. I was just going for it and things just happen.” She said she learned a lot from playing Bella Baxter in Yorgos Lanthimos’s film."
---

Finetuned on 6.6 million pairs of sentences and named entities.
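A minimal inference sketch via the text2text pipeline; the format of the generated entity string is not documented here, so treat the output shape as something to verify on a few examples.

```python
from transformers import pipeline

ner = pipeline("text2text-generation", model="cnmoro/t5-small-named-entity-recognition")
text = "Emma Stone looked genuinely shocked when her name was announced as the best actress winner."
print(ner(text, max_new_tokens=64)[0]["generated_text"])
```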
nop1006/gte-base-zh-finetuned-emotion
nop1006
2024-03-11T20:38:38Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:thenlper/gte-base-zh", "base_model:finetune:thenlper/gte-base-zh", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-07T15:16:58Z
--- license: mit base_model: thenlper/gte-base-zh tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: gte-base-zh-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gte-base-zh-finetuned-emotion This model is a fine-tuned version of [thenlper/gte-base-zh](https://huggingface.co/thenlper/gte-base-zh) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3958 - Accuracy: 0.8272 - F1: 0.8189 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4103 | 1.0 | 570 | 0.3675 | 0.8333 | 0.8271 | | 0.3452 | 2.0 | 1140 | 0.3796 | 0.8290 | 0.8180 | | 0.2784 | 3.0 | 1710 | 0.3930 | 0.8397 | 0.8346 | | 0.1904 | 4.0 | 2280 | 0.5113 | 0.8364 | 0.8301 | | 0.1239 | 5.0 | 2850 | 0.6590 | 0.8232 | 0.8100 | | 0.0828 | 6.0 | 3420 | 0.8153 | 0.8254 | 0.8241 | | 0.0624 | 7.0 | 3990 | 0.8672 | 0.8250 | 0.8210 | | 0.0413 | 8.0 | 4560 | 0.9244 | 0.8255 | 0.8159 | | 0.0303 | 9.0 | 5130 | 1.0888 | 0.8199 | 0.8068 | | 0.0233 | 10.0 | 5700 | 1.1171 | 0.8250 | 0.8194 | | 0.0159 | 11.0 | 6270 | 1.2642 | 0.8241 | 0.8115 | | 0.009 | 12.0 | 6840 | 1.2930 | 0.8265 | 0.8169 | | 0.0056 | 13.0 | 7410 | 1.3720 | 0.8260 | 0.8150 | | 0.0019 | 14.0 | 7980 | 1.3878 | 0.8255 | 0.8168 | | 0.003 | 15.0 | 8550 | 1.3958 | 0.8272 | 0.8189 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
rumeysacelik/turkishReviews-ds-commerce
rumeysacelik
2024-03-11T20:34:00Z
0
1
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-11T20:33:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Litzy619/V0309P8
Litzy619
2024-03-11T20:32:59Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-03-11T04:34:35Z
--- license: mit base_model: microsoft/phi-2 tags: - generated_from_trainer model-index: - name: V0309P8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0309P8 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0688 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 20 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1598 | 0.09 | 10 | 1.0083 | | 0.4064 | 0.17 | 20 | 0.1257 | | 0.1215 | 0.26 | 30 | 0.0774 | | 0.1055 | 0.34 | 40 | 0.0736 | | 0.0962 | 0.43 | 50 | 0.0642 | | 0.0853 | 0.51 | 60 | 0.0657 | | 0.0804 | 0.6 | 70 | 0.0616 | | 0.0843 | 0.68 | 80 | 0.0628 | | 0.0729 | 0.77 | 90 | 0.0615 | | 0.0704 | 0.85 | 100 | 0.0609 | | 0.0761 | 0.94 | 110 | 0.0601 | | 0.0721 | 1.02 | 120 | 0.0648 | | 0.0697 | 1.11 | 130 | 0.0638 | | 0.0654 | 1.19 | 140 | 0.0620 | | 0.0618 | 1.28 | 150 | 0.0608 | | 0.0632 | 1.37 | 160 | 0.0648 | | 0.0627 | 1.45 | 170 | 0.0636 | | 0.0584 | 1.54 | 180 | 0.0622 | | 0.0621 | 1.62 | 190 | 0.0604 | | 0.0615 | 1.71 | 200 | 0.0625 | | 0.0625 | 1.79 | 210 | 0.0594 | | 0.0606 | 1.88 | 220 | 0.0651 | | 0.0556 | 1.96 | 230 | 0.0609 | | 0.0544 | 2.05 | 240 | 0.0641 | | 0.0462 | 2.13 | 250 | 0.0659 | | 0.0468 | 2.22 | 260 | 0.0695 | | 0.043 | 2.3 | 270 | 0.0711 | | 0.0523 | 2.39 | 280 | 0.0665 | | 0.051 | 2.47 | 290 | 0.0643 | | 0.0502 | 2.56 | 300 | 0.0647 | | 0.0509 | 2.65 | 310 | 0.0661 | | 0.0434 | 2.73 | 320 | 0.0677 | | 0.0452 | 2.82 | 330 | 0.0682 | | 0.0444 | 2.9 | 340 | 0.0686 | | 0.0477 | 2.99 | 350 | 0.0688 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
6001k1d/dqn-SpaceInvadersNoFrameskip-v4
6001k1d
2024-03-11T20:28:51Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-16T15:51:34Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 579.50 +/- 324.58
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga 6001k1d -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga 6001k1d -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga 6001k1d
```

## Hyperparameters

```python
OrderedDict([('batch_size', 128),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
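If you prefer the Python API over the RL Zoo CLI, a minimal sketch follows; the checkpoint filename assumes the RL Zoo's usual naming, and proper evaluation would also require recreating the `AtariWrapper` and 4-frame stack listed in the hyperparameters.

```python
from huggingface_sb3 import load_from_hub  # pip install huggingface-sb3
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="6001k1d/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo naming
)
model = DQN.load(checkpoint)
print(model.policy)
```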
tsavage68/mistralit2_1000_STEPS_5e7_rate_03_beta_DPO
tsavage68
2024-03-11T20:21:55Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T09:38:21Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.2 tags: - trl - dpo - generated_from_trainer model-index: - name: mistralit2_1000_STEPS_5e7_rate_03_beta_DPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistralit2_1000_STEPS_5e7_rate_03_beta_DPO This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0554 - Rewards/chosen: -4.6458 - Rewards/rejected: -7.9897 - Rewards/accuracies: 0.6593 - Rewards/margins: 3.3439 - Logps/rejected: -55.2048 - Logps/chosen: -38.8718 - Logits/rejected: -2.6256 - Logits/chosen: -2.6266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6053 | 0.1 | 50 | 0.6740 | -0.3080 | -0.4763 | 0.5429 | 0.1682 | -30.1599 | -24.4126 | -2.8583 | -2.8587 | | 0.6455 | 0.2 | 100 | 0.6888 | -1.9296 | -2.8473 | 0.6110 | 0.9176 | -38.0633 | -29.8180 | -2.7011 | -2.7016 | | 0.6646 | 0.29 | 150 | 0.8842 | -4.4677 | -5.7716 | 0.5956 | 1.3039 | -47.8112 | -38.2782 | -2.7068 | -2.7076 | | 0.8576 | 0.39 | 200 | 0.8269 | 0.2095 | -0.3290 | 0.5341 | 0.5385 | -29.6690 | -22.6876 | -2.8074 | -2.8077 | | 0.9282 | 0.49 | 250 | 0.8715 | -3.3030 | -4.1864 | 0.5758 | 0.8834 | -42.5272 | -34.3958 | -2.8320 | -2.8326 | | 0.883 | 0.59 | 300 | 0.8491 | -1.6930 | -2.7293 | 0.5846 | 1.0364 | -37.6702 | -29.0290 | -2.8023 | -2.8028 | | 0.7641 | 0.68 | 350 | 0.8305 | -0.5284 | -1.4934 | 0.5868 | 0.9650 | -33.5504 | -25.1471 | -2.8008 | -2.8013 | | 0.8485 | 0.78 | 400 | 0.8168 | -1.8042 | -3.2662 | 0.6286 | 1.4620 | -39.4597 | -29.3999 | -2.8978 | -2.8983 | | 0.6637 | 0.88 | 450 | 0.9089 | -4.1779 | -5.6349 | 0.6220 | 1.4570 | -47.3556 | -37.3123 | -2.7996 | -2.8003 | | 0.8293 | 0.98 | 500 | 0.7790 | -1.7260 | -3.1768 | 0.6242 | 1.4508 | -39.1617 | -29.1392 | -2.7937 | -2.7943 | | 0.1061 | 1.07 | 550 | 0.8642 | -2.6748 | -4.9677 | 0.6659 | 2.2929 | -45.1314 | -32.3019 | -2.7609 | -2.7616 | | 0.1183 | 1.17 | 600 | 1.0052 | -4.2792 | -7.1691 | 0.6527 | 2.8899 | -52.4695 | -37.6498 | -2.6760 | -2.6769 | | 0.3423 | 1.27 | 650 | 1.0032 | -4.1972 | -7.1444 | 0.6571 | 2.9472 | -52.3871 | -37.3765 | -2.6563 | -2.6572 | | 0.3015 | 1.37 | 700 | 1.0111 | -4.0263 | -7.1542 | 0.6549 | 3.1280 | -52.4198 | -36.8067 | -2.6518 | -2.6526 | | 0.0814 | 1.46 | 750 | 1.0416 | -4.3351 | -7.5972 | 0.6484 | 3.2621 | -53.8964 | -37.8360 | -2.6335 | -2.6344 | | 0.1279 | 1.56 | 800 | 1.0511 | -4.6097 | 
-7.9321 | 0.6505 | 3.3224 | -55.0127 | -38.7514 | -2.6277 | -2.6287 | | 0.1507 | 1.66 | 850 | 1.0478 | -4.6393 | -7.9834 | 0.6484 | 3.3441 | -55.1838 | -38.8501 | -2.6262 | -2.6272 | | 0.2148 | 1.76 | 900 | 1.0515 | -4.6439 | -7.9924 | 0.6527 | 3.3485 | -55.2139 | -38.8655 | -2.6260 | -2.6270 | | 0.2291 | 1.86 | 950 | 1.0554 | -4.6452 | -7.9877 | 0.6505 | 3.3425 | -55.1980 | -38.8697 | -2.6257 | -2.6267 | | 0.13 | 1.95 | 1000 | 1.0554 | -4.6458 | -7.9897 | 0.6593 | 3.3439 | -55.2048 | -38.8718 | -2.6256 | -2.6266 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.0.0+cu117 - Datasets 2.18.0 - Tokenizers 0.15.2
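For reference, the `Rewards/*` columns above are the implicit DPO rewards: β-scaled log-probability ratios between the policy and the frozen reference model, with β = 0.3 for this run (the `03_beta` in the model name). `Rewards/margins` is simply chosen minus rejected:

$$
r_\theta(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right),
\qquad
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma \big( r_\theta(x, y_{\mathrm{chosen}}) - r_\theta(x, y_{\mathrm{rejected}}) \big)
$$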
Villian7/01Coder
Villian7
2024-03-11T20:19:29Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "code-llm", "mistral-7b", "language-model", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T15:47:30Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards tags: - code-llm - mistral-7b - language-model --- # Model Card for 01Coder 7B This model card provides details about a code language model (LLM) based on Mistral 7B architecture. It has been trained on a combination of three datasets: ise-uiuc/Magicoder-OSS-Instruct-75K, HuggingFaceH4/CodeAlpaca_20K, and theblackcat102/evol-codealpaca-v1. ## Model Details ### Model Description This model is a language model fine-tuned for code generation tasks, leveraging the Mistral 7B base model architecture. It has been trained on a combination of three datasets, namely Magicoder-OSS-Instruct-75K, CodeAlpaca_20K, and evol-codealpaca-v1. The model aims to assist developers in generating code snippets for various programming tasks, ranging from natural language instructions to specific coding prompts. - **Developed by:** Manoj Athreya A - **Model type:** Language model (LLM) - **License:** [Apache 2.0 License] - **Finetuned from model:** Mistral 7B ## Intended Uses - Code generation from natural language prompts. - Assisting developers in completing code snippets. - Augmenting code-related tasks with automated generation capabilities. ## Limitations and Ethical Considerations - **Bias:** As with any language model, biases present in the training data may manifest in the generated code snippets. - **Accuracy:** While the model aims to generate accurate code, it may occasionally produce incorrect or suboptimal solutions, especially for complex tasks. - **Security:** Generated code should be reviewed for security vulnerabilities, as the model may inadvertently produce insecure implementations. - **Ethical Use:** Users are encouraged to employ the model responsibly and ethically, avoiding harmful or malicious use cases. ### Recommendations - Fine-tuning the model on specific domains or tasks may improve its performance. - Validate generated code in real-world scenarios to ensure its correctness and reliability. - Provide feedback to continuously improve the model's performance and address any issues encountered during usage. ## License - The source code in this repo is licensed under the Apache 2.0 license. ## Version History - 01-Coder-7Bv0.1
YASHIKAaa/bert2
YASHIKAaa
2024-03-11T20:17:12Z
194
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T20:16:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jackshannon/phi-1_5-finetuned-question-generation-merged
jackshannon
2024-03-11T20:16:20Z
34
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T20:12:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RunDiffusion/Juggernaut-XL-Lightning
RunDiffusion
2024-03-11T20:09:52Z
11,409
48
diffusers
[ "diffusers", "art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-02-23T20:45:29Z
---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- people
- diffusion
- Cinematic
- Photography
- Landscape
- Interior
- Food
- Car
- Wildlife
- Architecture
thumbnail: https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/49a32981-4aa2-410e-a5b1-35835bf20d00/padthumb
base_model: stabilityai/stable-diffusion-xl-base-1.0
---

# Juggernaut XL + RunDiffusion Lightning!

![juggernaut XL photo previews](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/49a32981-4aa2-410e-a5b1-35835bf20d00/public)

![RunDiffusion Logo](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/ca2b388d-a835-490c-dec0-e764bee8d000/micro)

## Want the full version of Juggernaut? Try v9!

[Juggernaut v9 + RunDiffusion Photo v2](https://huggingface.co/RunDiffusion/Juggernaut-XL-v9)

This model is not permitted to be used behind API services. Please contact [[email protected]](mailto:[email protected]) for business inquiries, commercial licensing, custom models, and consultation.

Juggernaut is available on the new Auto1111 Forge on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)

# Juggernaut XL Lightning is here

Get ready for speed and quality. Whoever said you couldn't have both?! Now you can with the world's most downloaded model series! Here are some tips to get you started.

Use this in Automatic1111 and Automatic1111 Forge (both available on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)).

Start with your favorite prompt and negative prompt.

- Set the sampler to: DPM++ SDE or DPM++ SDE Karras
- Set the steps between 5 and 7
- Set the CFG between 1.5 and 2
- Set the resolution to >= 1024x1024

![Settings Here](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/d89ce182-e42d-4b41-eeed-03797457de00/public)
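For diffusers users, a minimal sketch that mirrors the recommended settings above (DPM++ SDE corresponds to `DPMSolverSDEScheduler`; the prompt is only an example):

```python
import torch
from diffusers import DPMSolverSDEScheduler, StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-Lightning", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a cinematic photo of a lighthouse at dusk",
    num_inference_steps=6,   # within the 5-7 range above
    guidance_scale=2.0,      # within the 1.5-2 range above
    width=1024,
    height=1024,
).images[0]
image.save("juggernaut_lightning.png")
```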
brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
brucethemoose
2024-03-11T20:09:21Z
1,392
11
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "merge", "en", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T07:18:23Z
--- language: - en license: other library_name: transformers tags: - text-generation-inference - merge license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE pipeline_tag: text-generation model-index: - name: CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.41 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.77 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 77.44 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 57.84 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 61.33 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity name: Open LLM Leaderboard --- ### Possibly obsolete, replaced by https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5 Old model description below: *** **Dolphin-2.2-yi-34b-200k**, **Nous-Capybara-34B**, **Tess-M-v1.4**, **Airoboros-3_1-yi-34b-200k**, **PlatYi-34B-200K-Q**, and **Una-xaberius-34b-v1beta** merged with a new, experimental implementation of "dare ties" via mergekit. 
See:

> [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://github.com/yule-BUAA/MergeLM)

> https://github.com/cg123/mergekit/tree/dare

This variant is merged with a "higher than recommended" density with the following config, and the tokenizer from chargoddard's Yi-Llama:

```
models:
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    # no parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
    parameters:
      weight: 0.19
      density: 0.6
  - model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
    parameters:
      weight: 0.14
      density: 0.5
  - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
    parameters:
      weight: 0.19
      density: 0.6
  - model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200K-Q
    parameters:
      weight: 0.14
      density: 0.5
  - model: /home/alpha/FastModels/ehartford_dolphin-2.2-yi-34b-200k
    parameters:
      weight: 0.19
      density: 0.6
  - model: /home/alpha/FastModels/fblgit_una-xaberius-34b-v1beta
    parameters:
      weight: 0.15
      density: 0.08
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```

***

## Prompt template: Orca-Vicuna?

```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```

It might recognize ChatML from Dolphin+Xaberius, and Llama-chat from Airoboros. Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition.

***

## Running

Being a Yi model, try disabling the BOS token and/or running a lower temperature with 0.05-0.13 MinP, a little repetition penalty, and no other samplers. Yi tends to run "hot" by default.

24GB GPUs can run Yi-34B-200K models at **45K-75K context** with exllamav2. I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/)

I recommend exl2 quantizations profiled on data similar to the desired task. It is especially sensitive to the quantization data at low bpw! I published my own quantizations profiled on vicuna chat + fiction writing here: [4bpw](https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-34B-200K-exl2-4bpw-fiction) [3.1bpw](https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-34B-200K-exl2-4bpw-fiction)

To load this in full-context backends like transformers and vllm, you *must* change `max_position_embeddings` in config.json to a lower value than 200,000, otherwise you will OOM! (A minimal sketch of this follows the testing notes below.)

***

## Testing Notes

Various densities were tested with perplexity tests and long context prompts. Relatively high densities seem to perform better, contrary to the findings of the Super Mario paper.

This particular version is merged with more than the "recommended" max density of 0.5. It seems to result in even better perplexity, and a much higher position on the hf leaderboard, but I'm not sure if this translates to better output.

Weights that add up to 1 seem to be optimal.

DARE ties also seems to produce better, lower-perplexity merges than a regular ties merge, task arithmetic, or a slerp merge.

Xaberius is not a 200K model, hence it was merged at a very low density to try and preserve Yi 200K's long context performance while still inheriting some of Xaberius's performance.

I chose not to include other finetunes because they aren't trained on the 200K base. If any other 200K finetunes pop up, let me know.
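Picking up the `max_position_embeddings` note from the Running section, here is a minimal sketch of loading the model in transformers with a reduced context window (32K is illustrative; pick whatever your hardware can actually hold):

```python
from transformers import AutoConfig, AutoModelForCausalLM

repo = "brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity"
config = AutoConfig.from_pretrained(repo)
config.max_position_embeddings = 32768  # well below 200,000 to avoid OOM

model = AutoModelForCausalLM.from_pretrained(repo, config=config, device_map="auto")
```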
*** ## Credits: https://github.com/cg123/mergekit/tree/dare https://huggingface.co/ehartford/dolphin-2.2-yi-34b-200k https://huggingface.co/kyujinpy/PlatYi-34B-200K-Q https://huggingface.co/NousResearch/Nous-Capybara-34B/ https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k https://huggingface.co/migtissera/Tess-M-v1.4 https://huggingface.co/fblgit/una-xaberius-34b-v1beta https://huggingface.co/chargoddard/Yi-34B-200K-Llama https://huggingface.co/01-ai/Yi-34B-200K # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity) | Metric |Value| |---------------------------------|----:| |Avg. |72.15| |AI2 Reasoning Challenge (25-Shot)|67.41| |HellaSwag (10-Shot) |85.77| |MMLU (5-Shot) |77.44| |TruthfulQA (0-shot) |57.84| |Winogrande (5-shot) |83.11| |GSM8k (5-shot) |61.33|
brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-ExtremeDensity
brucethemoose
2024-03-11T20:09:17Z
1,397
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T16:39:14Z
--- license: other tags: - merge license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE model-index: - name: CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-ExtremeDensity results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.89 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-ExtremeDensity name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.69 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-ExtremeDensity name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 77.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-ExtremeDensity name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 57.63 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-ExtremeDensity name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-ExtremeDensity name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 59.82 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-ExtremeDensity name: Open LLM Leaderboard --- Just a test of a very high density DARE ties merge, for benchmarking on the open llm leaderboard. 
You probably shouldn't use this model; use this one instead: https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity

mergekit config:
```
models:
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    # no parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
    parameters:
      weight: 0.19
      density: 0.83
  - model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
    parameters:
      weight: 0.14
      density: 0.6
  - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
    parameters:
      weight: 0.19
      density: 0.83
  - model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200K-Q
    parameters:
      weight: 0.14
      density: 0.6
  - model: /home/alpha/FastModels/ehartford_dolphin-2.2-yi-34b-200k
    parameters:
      weight: 0.19
      density: 0.83
  - model: /home/alpha/FastModels/fblgit_una-xaberius-34b-v1beta
    parameters:
      weight: 0.15
      density: 0.08
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-ExtremeDensity)

| Metric |Value|
|---------------------------------|----:|
|Avg. |71.57|
|AI2 Reasoning Challenge (25-Shot)|66.89|
|HellaSwag (10-Shot) |85.69|
|MMLU (5-Shot) |77.35|
|TruthfulQA (0-shot) |57.63|
|Winogrande (5-shot) |82.00|
|GSM8k (5-shot) |59.82|
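For reference, a minimal sketch of actually running a merge from a config like the one above, assuming mergekit is installed and the YAML is saved as `config.yml` (the output path is an illustrative placeholder):

```python
import subprocess

# Invoke mergekit's CLI entry point; "--cuda" runs tensor ops on the GPU if available.
subprocess.run(
    ["mergekit-yaml", "config.yml", "./merged-model", "--cuda"],
    check=True,
)
```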
RunDiffusion/Juggernaut-XL-v6
RunDiffusion
2024-03-11T20:08:41Z
260,317
3
diffusers
[ "diffusers", "art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-02-22T00:14:34Z
---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- people
- diffusion
- Cinematic
- Photography
- Landscape
- Interior
- Food
- Car
- Wildlife
- Architecture
thumbnail: https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/a38aa9e8-e3cf-4d43-afbd-fd1de0896500/padthumb
base_model: stabilityai/stable-diffusion-xl-base-1.0
---
# Juggernaut XL v6 + RunDiffusion Photo v1 Official

![juggernaut XL photo previews](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/a38aa9e8-e3cf-4d43-afbd-fd1de0896500/public)

![RunDiffusion Logo](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/ca2b388d-a835-490c-dec0-e764bee8d000/micro)

## Juggernaut v9 is here! [Juggernaut v9 + RunDiffusion Photo v2](https://huggingface.co/RunDiffusion/Juggernaut-XL-v9)

This model is not permitted to be used behind API services. Please contact [[email protected]](mailto:[email protected]) for business inquiries, commercial licensing, custom models, and consultation.

Juggernaut is available on the new Auto1111 Forge on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)

A big thanks for Version 6 goes to [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo) ([Photo Model](https://rundiffusion.com/rundiffusion-photo/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)) and [Adam](https://twitter.com/Colorblind_Adam), who diligently helped me test :) (Leave some love for them ;) )

For business inquiries, commercial licensing, custom models, and consultation contact me under [email protected]
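To try the checkpoint locally, a minimal hedged sketch of loading it with diffusers (the prompt and settings below are illustrative assumptions, not the author's recommendations):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# SDXL-based checkpoint per the card's base model; fp16 keeps VRAM usage reasonable.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v6",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "cinematic photo of a lighthouse at dusk, dramatic lighting",  # illustrative prompt
    num_inference_steps=30,
).images[0]
image.save("juggernaut_sample.png")
```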
Sukanth07/gemma-2b-plm-ft
Sukanth07
2024-03-11T20:08:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-11T20:07:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RunDiffusion/Juggernaut-XL-v7-fp16-vae-fix
RunDiffusion
2024-03-11T20:07:02Z
61
1
diffusers
[ "diffusers", "art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-02-22T00:14:59Z
---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- people
- diffusion
- Cinematic
- Photography
- Landscape
- Interior
- Food
- Car
- Wildlife
- Architecture
thumbnail: https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/dfaf8264-1355-413a-504d-eb792e69da00/padthumb
base_model: stabilityai/stable-diffusion-xl-base-1.0
---
# Juggernaut XL v7 FP16 VAE Fix + RunDiffusion Photo v1 Official

![juggernaut XL photo previews](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/dfaf8264-1355-413a-504d-eb792e69da00/public)

![RunDiffusion Logo](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/ca2b388d-a835-490c-dec0-e764bee8d000/micro)

## Juggernaut v9 is here! [Juggernaut v9 + RunDiffusion Photo v2](https://huggingface.co/RunDiffusion/Juggernaut-XL-v9)

This model is not permitted to be used behind API services. Please contact [[email protected]](mailto:[email protected]) for business inquiries, commercial licensing, custom models, and consultation.

Juggernaut is available on the new Auto1111 Forge on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)

A big thanks for Version 7 FP16 VAE Fix goes to [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo) ([Photo Model](https://rundiffusion.com/rundiffusion-photo/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)) and [Adam](https://twitter.com/Colorblind_Adam), who diligently helped me test :) (Leave some love for them ;) )

For business inquiries, commercial licensing, custom models, and consultation contact me under [email protected]
RunDiffusion/Juggernaut-XL-v5
RunDiffusion
2024-03-11T20:06:36Z
60
0
diffusers
[ "diffusers", "art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-02-22T00:58:09Z
---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- people
- diffusion
- Cinematic
- Photography
- Landscape
- Interior
- Food
- Car
- Wildlife
- Architecture
thumbnail: https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/e93ca50b-aadc-4645-2aa6-2931b5a26900/padthumb
base_model: stabilityai/stable-diffusion-xl-base-1.0
---
# Juggernaut XL v5 Official

![juggernaut XL photo previews](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/e93ca50b-aadc-4645-2aa6-2931b5a26900/public)

![RunDiffusion Logo](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/ca2b388d-a835-490c-dec0-e764bee8d000/micro)

## Juggernaut v9 is here! [Juggernaut v9 + RunDiffusion Photo v2](https://huggingface.co/RunDiffusion/Juggernaut-XL-v9)

This model is not permitted to be used behind API services. Please contact [[email protected]](mailto:[email protected]) for business inquiries, commercial licensing, custom models, and consultation.

Juggernaut is available on the new Auto1111 Forge on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)

For business inquiries, commercial licensing, custom models, and consultation contact me under [email protected]
brucethemoose/Yi-34B-200K-DARE-megamerge-v8
brucethemoose
2024-03-11T20:05:56Z
176
27
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "Yi", "en", "arxiv:2311.03099", "arxiv:2306.01708", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-14T18:13:39Z
--- language: - en license: other library_name: transformers tags: - mergekit - merge - Yi license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE base_model: [] model-index: - name: Yi-34B-200K-DARE-megamerge-v8 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.75 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.06 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 77.03 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.31 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 65.43 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8 name: Open LLM Leaderboard --- # Yi 34B 200K DARE Merge v8 A merge of many Yi 34B 200K models using the new DARE Ties method via mergekit. The goal is to create a merge model that excels at 32K+ context performance, without any additional finetuning. ## Prompt template: Orca-Vicuna ``` SYSTEM: {system_message} USER: {prompt} ASSISTANT: ``` It might recognize ChatML, and possibly Alpaca-like formats. Raw prompting as described here is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/ ## Running Being a Yi model, run a lower temperature with 0.1 or higher MinP, a little repetition penalty, maybe mirostat with a low tau, and no other samplers. Yi tends to run "hot" by default, and it really needs a low temperature + MinP to cull Yi's huge vocabulary. See the explanation here: https://github.com/ggerganov/llama.cpp/pull/3841 24GB GPUs can efficiently run Yi-34B-200K models at **40K-90K context** with exllamav2, and performant UIs like [exui](https://github.com/turboderp/exui). 
I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/).

16GB GPUs can still run the high context with aggressive quantization.

LoneStriker has also uploaded general purpose quantizations here: https://huggingface.co/models?sort=trending&search=LoneStriker+Yi-34B-200K-DARE-megamerge-v8

Additionally, TheBloke has uploaded experimental GGUFs using llama.cpp's new imatrix quantization feature, profiled on VMware open-instruct: https://huggingface.co/TheBloke/Yi-34B-200K-DARE-megamerge-v8-GGUF

To load/train this in full-context backends like transformers, you *must* change `max_position_embeddings` in config.json to a lower value than 200,000, otherwise you will OOM!

I do not recommend running high context without context-efficient backends like exllamav2, litellm or unsloth.

## Testing Notes

See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5#testing-notes

An intermediate merge model was created to try and extend the context of several 4k models before adding them to the main merge, as seen in the "megamerge" recipe below. I can upload this upon request.

In addition, the weight gradients are biased towards Vicuna-format models in the first few layers to try and "emphasize" the Orca-Vicuna prompt template. How successful this is remains to be seen.

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base.

### Models Merged

The following models were included in the merge:
* https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat
* https://huggingface.co/jondurbin/bagel-34b-v0.2
* https://huggingface.co/migtissera/Tess-M-Creative-v1.0
* https://huggingface.co/brucethemoose/SUS-Bagel-200K-DARE-Test
* https://huggingface.co/Mihaiii/Pallas-0.5
* https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k
* https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2
* https://huggingface.co/migtissera/Tess-34B-v1.4
* https://huggingface.co/SUSTech/SUS-Chat-34B
* https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2
* https://huggingface.co/bhenrym14/platypus-yi-34b
* https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp
* https://huggingface.co/TriadParty/deepsex-34b
* https://huggingface.co/TriadParty/deepmoney-34b-200k-base
* https://huggingface.co/chargoddard/Yi-34B-200K-Llama
* https://huggingface.co/chargoddard/Yi-34B-Llama

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
    # No parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    #200K base to extend the context of 4K models, max density as we *want* it to 'interfere'
    parameters:
      weight: 0.33
      density: 1
  - model: /home/alpha/Models/Raw/Weyaxi_Nous-Hermes-2-SUS-Chat-34B-Slerp
    parameters:
      weight: 0.15
      density: 0.36
  - model: /home/alpha/Models/Raw/jondurbin_bagel-dpo-34b-v0.2
    #Mix dpo with sft to tone down dpo
    parameters:
      weight: 0.06
      density: 0.36
  - model: /home/alpha/Models/Raw/jondurbin_bagel-34b-v0.2
    parameters:
      weight: 0.06
      density: 0.41
  - model: /home/alpha/Models/Raw/bhenrym14_platypus-yi-34b
    #Vicuna format
    parameters:
      weight: 0.19
      density: 0.41
#  - model: /home/alpha/Models/Raw/01-ai_Yi-34B-Chat #+/home/alpha/Models/Raw/Doctor-Shotgun_limarpv3-yi-llama-34b-lora
#    #Can't get lora OR base model to work without erroring out?
#    parameters:
#      weight: 0.04
#      density: 0.36
  - model: /home/alpha/Models/Raw/TriadParty_deepsex-34b
    #Base model with no prompt
    parameters:
      weight: 0.21
      density: 0.39
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
parameters:
  int8_mask: true
dtype: bfloat16
name: 4kmerge-v2
---
models:
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    # No parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
    #Emphasize the beginning of Vicuna format models
    parameters:
      weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113]
      density: 0.61
  - model: /home/alpha/Models/Raw/Mihaiii_Pallas-0.5
    # Vicuna format
    parameters:
      weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113]
      density: 0.61
  - model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
    parameters:
      weight: [0.02, 0.081, 0.081, 0.081, 0.081, 0.081]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/jondurbin_bagel-34b-v0.2
    #Only the SFT in the main merge since the DPO version seems to have no long context ability at all, and some overfitting(?) issues
    parameters:
      weight: [0.02, 0.093, 0.093, 0.093, 0.093, 0.093]
      density: 0.4
  - model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200k-Q-FastChat
    parameters:
      weight: [0.02, 0.081, 0.081, 0.081, 0.081, 0.081]
      density: 0.59
#  - model: /home/alpha/Storage/Models/Raw/ehartford_dolphin-2.2-yi-34b-200k
#    # Dolphin 200K seems to be funky according to multiple leaderboards and perplexity tests?
#    parameters:
#      weight: 0.15
#      density: 0.6
  - model: /home/alpha/Models/Raw/adamo1139_Yi-34B-200K-AEZAKMI-v2
    parameters:
      weight: [0.02, 0.096, 0.096, 0.096, 0.096, 0.096]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
    parameters:
      weight: [0.21, 0.115, 0.115, 0.115, 0.115, 0.115]
      density: 0.59
  - model: 4kmerge-v2
    #Previous merge
    parameters:
      weight: [0.02, 0.115, 0.115, 0.115, 0.115, 0.115]
      density: 0.4
  - model: /home/alpha/Models/Raw/migtissera_Tess-M-Creative-v1.0
    # Vicuna format
    parameters:
      weight: [0.21, 0.09, 0.09, 0.09, 0.09, 0.09]
      density: 0.61
  - model: /home/alpha/Models/Raw/TriadParty_deepmoney-34b-200k-base
    # No prompt format, native long context full finetune
    parameters:
      weight: [0.04, 0.103, 0.103, 0.103, 0.103, 0.103]
      density: 0.61
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```

## Self Promotion

I'm part of an AI startup called Holocene AI!

We're new, busy, and still setting things up. But if you have any business inquiries, want a job, or just want some consultation, feel free to shoot me an email. We have expertise in RAG applications and llama/embeddings model finetuning, and absolutely *none* of the nonsense of scammy AI startups.

Contact me at: [email protected]

I also set up a Ko-Fi! I want to run some (personal) training/LASERing as well, at 100K context or so. If you'd like to buy me 10 minutes on an A100 (or 5 seconds on an MI300X), I'd appreciate it: https://ko-fi.com/alphaatlas

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-megamerge-v8)

| Metric |Value|
|---------------------------------|----:|
|Avg. |72.56|
|AI2 Reasoning Challenge (25-Shot)|67.75|
|HellaSwag (10-Shot) |86.06|
|MMLU (5-Shot) |77.03|
|TruthfulQA (0-shot) |56.31|
|Winogrande (5-shot) |82.79|
|GSM8k (5-shot) |65.43|
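Since the card recommends the Orca-Vicuna template, a tiny hedged sketch of building such a prompt (the default system message is an illustrative assumption, not the author's):

```python
def orca_vicuna_prompt(user_message: str,
                       system_message: str = "You are a helpful assistant.") -> str:
    # Matches the Orca-Vicuna layout shown in the card above.
    return f"SYSTEM: {system_message}\nUSER: {user_message}\nASSISTANT:"

print(orca_vicuna_prompt("Summarize the DARE TIES merge method in one sentence."))
```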
RunDiffusion/Juggernaut-XL
RunDiffusion
2024-03-11T20:05:53Z
185
1
diffusers
[ "diffusers", "art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-02-22T00:43:28Z
---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- people
- diffusion
- Cinematic
- Photography
- Landscape
- Interior
- Food
- Car
- Wildlife
- Architecture
thumbnail: https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/def40db4-42d0-4e45-2baf-bebed29ae000/padthumb
base_model: stabilityai/stable-diffusion-xl-base-1.0
---
# Juggernaut XL v2 Official

![juggernaut XL photo previews](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/def40db4-42d0-4e45-2baf-bebed29ae000/public)

![RunDiffusion Logo](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/ca2b388d-a835-490c-dec0-e764bee8d000/micro)

## Juggernaut v9 is here! [Juggernaut v9 + RunDiffusion Photo v2](https://huggingface.co/RunDiffusion/Juggernaut-XL-v9)

Version 2 is technically the best of the first four versions and should be used.

This model is not permitted to be used behind API services. Please contact [[email protected]](mailto:[email protected]) for business inquiries, commercial licensing, custom models, and consultation.

Juggernaut is available on the new Auto1111 Forge on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)

For business inquiries, commercial licensing, custom models, and consultation contact me under [email protected]
Holarissun/zephyr3b-aisft-gsm8k-seq
Holarissun
2024-03-11T20:01:55Z
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-11T20:01:51Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: zephyr3b-aisft-gsm8k-seq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr3b-aisft-gsm8k-seq This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
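This repo holds a PEFT adapter rather than a standalone model, so it loads on top of the base checkpoint. A minimal hedged sketch (repo ids are from this card; `trust_remote_code` may or may not be needed depending on your transformers version):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "stabilityai/stablelm-zephyr-3b"
adapter_id = "Holarissun/zephyr3b-aisft-gsm8k-seq"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# Attach the fine-tuned adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
```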
Holarissun/zephyr3b-aisft-gsm8k-rand
Holarissun
2024-03-11T19:59:11Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-11T19:59:08Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: zephyr3b-aisft-gsm8k-rand results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr3b-aisft-gsm8k-rand This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
mehmettozlu/multilingual-xlm-roberta-for-ner
mehmettozlu
2024-03-11T19:57:21Z
91
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-11T19:20:51Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: multilingual-xlm-roberta-for-ner results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: validation args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8625248226950355 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-xlm-roberta-for-ner This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1350 - F1: 0.8625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2657 | 1.0 | 525 | 0.1631 | 0.8156 | | 0.1275 | 2.0 | 1050 | 0.1370 | 0.8521 | | 0.0797 | 3.0 | 1575 | 0.1350 | 0.8625 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
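For quick inference with this NER model, a minimal sketch using the transformers pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mehmettozlu/multilingual-xlm-roberta-for-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)

print(ner("Angela Merkel besuchte im Mai Berlin."))
```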
tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF
tolgadev
2024-03-11T19:52:06Z
90
1
transformers
[ "transformers", "gguf", "trendyol", "llama-2", "turkish", "text-generation", "tr", "en", "base_model:Trendyol/Trendyol-LLM-7b-chat-v1.0", "base_model:quantized:Trendyol/Trendyol-LLM-7b-chat-v1.0", "license:apache-2.0", "region:us" ]
text-generation
2024-03-11T14:21:33Z
--- model_name: Trendyol-LLM-7b-chat-v1.0-gguf model_creator: Trendyol base_model: Trendyol/Trendyol-LLM-7b-chat-v1.0 language: - tr - en pipeline_tag: text-generation license: apache-2.0 model_type: llama library_name: transformers inference: false tags: - trendyol - llama-2 - turkish quantized_by: tolgadev --- ## Trendyol-LLM-7b-chat-v1.0-gguf models ---- ## Description This repo contains all types of GGUF formatted model files for [Trendyol-LLM-7b-chat-v1.0](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0). <img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0/resolve/main/trendyol-llm-mistral.jpg" alt="drawing" width="400"/> ## Quantized LLM models and methods | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [Trendyol-LLM-7b-chat-v1.0.Q2_K.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q2_K.gguf) | Q2_K | 2 | 2.59 GB| 4.88 GB | smallest, significant quality loss - not recommended for most purposes | | [Trendyol-LLM-7b-chat-v1.0.Q3_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 3.01 GB| 5.56 GB | very small, high quality loss | | [Trendyol-LLM-7b-chat-v1.0.Q3_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.36 GB| 5.91 GB | very small, high quality loss | | [Trendyol-LLM-7b-chat-v1.0.Q3_K_L.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.66 GB| 6.20 GB | small, substantial quality loss | | [Trendyol-LLM-7b-chat-v1.0.Q4_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q4_0.gguf) | Q4_0 | 4 | 3.9 GB| 6.45 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Trendyol-LLM-7b-chat-v1.0.Q4_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 3.93 GB| 6.48 GB | small, greater quality loss | | [Trendyol-LLM-7b-chat-v1.0.Q4_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.15 GB| 6.69 GB | medium, balanced quality - recommended | | [Trendyol-LLM-7b-chat-v1.0.Q5_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q5_0.gguf) | Q5_0 | 5 | 4.73 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [Trendyol-LLM-7b-chat-v1.0.Q5_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 4.75 GB| 7.27 GB | large, low quality loss - recommended | | [Trendyol-LLM-7b-chat-v1.0.Q5_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 4.86 GB| 7.40 GB | large, very low quality loss - recommended | | [Trendyol-LLM-7b-chat-v1.0.Q6_K.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q6_K.gguf) | Q6_K | 6 | 5.61 GB| 8.15 GB | very large, extremely low quality loss | The names of the quantization methods follow the naming convention: "q" + the number of bits + the variant used (detailed below). 
Here is a list of all the models and their corresponding use cases, based on model cards made by [TheBloke](https://huggingface.co/TheBloke/):

* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_s`: Uses Q3_K for all tensors
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models.
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q4_k_s`: Uses Q4_K for all tensors
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q5_k_s`: Uses Q5_K for all tensors
* `q6_k`: Uses Q8_K for all tensors

**TheBloke recommends using Q5_K_M** as it preserves most of the model's performance. Alternatively, you can use Q4_K_M if you want to save some memory. In general, K_M versions are better than K_S versions.

## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. (A minimal sketch of fetching a single file programmatically appears at the end of this card.)

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

- LM Studio
- LoLLMS Web UI
- Faraday.dev

## Special thanks to [TheBloke on Huggingface](https://huggingface.co/TheBloke) and [Maxime Labonne on Github](https://github.com/mlabonne/llm-course)

-----

# **Trendyol LLM v1.0 - DPO**

Trendyol LLM v1.0 - DPO is a generative model that is based on the Mistral 7B model. DPO training was applied. This is the repository for the chat model.

## Model Details

**Model Developers** Trendyol

**Variations** [base](https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v1.0), [chat](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0), and dpo variations.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Trendyol LLM is an auto-regressive language model (based on Mistral 7B) that uses an optimized transformer architecture. Huggingface TRL lib was used for training.
The DPO version is fine-tuned on 11K sets (prompt-chosen-reject) with the following trainables by using LoRA: - **lr**=5e-6 - **lora_rank**=64 - **lora_alpha**=128 - **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj - **lora_dropout**=0.05 - **bf16**=True - **beta**=0.01 - **max_length**= 1024 - **max_prompt_length**= 512 - **lr_scheduler_type**= cosine - **torch_dtype**= bfloat16 <img src="https://camo.githubusercontent.com/3e61ca080778f62988b459c7321726fa35bb3776ceb07ecaabf71ebca44f95a7/68747470733a2f2f68756767696e67666163652e636f2f64617461736574732f74726c2d696e7465726e616c2d74657374696e672f6578616d706c652d696d616765732f7265736f6c76652f6d61696e2f696d616765732f74726c5f62616e6e65725f6461726b2e706e67" alt="drawing" width="600"/> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png" alt="drawing" width="600"/> ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_id = "Trendyol/Trendyol-LLM-7b-chat-v1.0" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto', load_in_8bit=True) sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device_map="auto", max_new_tokens=1024, return_full_text=True, repetition_penalty=1.1 ) DEFAULT_SYSTEM_PROMPT = "Sen yardımcı bir asistansın ve sana verilen talimatlar doğrultusunda en iyi cevabı üretmeye çalışacaksın.\n" TEMPLATE = ( "[INST] {system_prompt}\n\n" "{instruction} [/INST]" ) def generate_prompt(instruction, system_prompt=DEFAULT_SYSTEM_PROMPT): return TEMPLATE.format_map({'instruction': instruction,'system_prompt': system_prompt}) def generate_output(user_query, sys_prompt=DEFAULT_SYSTEM_PROMPT): prompt = generate_prompt(user_query, sys_prompt) outputs = pipe(prompt, **sampling_params ) return outputs[0]["generated_text"].split("[/INST]")[-1] user_query = "Türkiye'de kaç il var?" response = generate_output(user_query) print(response) ``` with chat template: ```python pipe = pipeline("conversational", model=model, tokenizer=tokenizer, device_map="auto", max_new_tokens=1024, repetition_penalty=1.1 ) messages = [ {"role": "user", "content": "Türkiye'de kaç il var?"} ] outputs = pipe(messages, **sampling_params) print(outputs) ``` ## Limitations, Risks, Bias, and Ethical Considerations ### Limitations and Known Biases - **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it is important to note that it has not undergone extensive real-world application testing. Its effectiveness and reliability across diverse scenarios remain largely unverified. - **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations. - **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers. ### Risks and Ethical Considerations - **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. 
We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment. - **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies. - **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks. ### Recommendations for Safe and Ethical Usage - **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly. - **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive. - **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences.
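As the download section above notes, you rarely need the whole repo. A minimal sketch of fetching just one quantized file with `huggingface_hub` (the Q5_K_M filename is taken from the table above; swap in whichever quant you want):

```python
from huggingface_hub import hf_hub_download

# Downloads a single GGUF file into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF",
    filename="trendyol-llm-7b-chat-v1.0.Q5_K_M.gguf",
)
print(path)
```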
mHossain/ml_sum_v3
mHossain
2024-03-11T19:47:57Z
94
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:mHossain/ml_sum_v2", "base_model:finetune:mHossain/ml_sum_v2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-09T17:39:19Z
--- license: apache-2.0 base_model: mHossain/ml_sum_v2 tags: - generated_from_trainer metrics: - rouge model-index: - name: ml_sum_v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ml_sum_v3 This model is a fine-tuned version of [mHossain/ml_sum_v2](https://huggingface.co/mHossain/ml_sum_v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 312 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.9648 | 2.0 | 625 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.9648 | 3.0 | 936 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
papahawk/falcon-40b
papahawk
2024-03-11T19:44:24Z
15
1
transformers
[ "transformers", "pytorch", "RefinedWeb", "text-generation", "custom_code", "en", "de", "es", "fr", "dataset:tiiuae/falcon-refinedweb", "arxiv:2205.14135", "arxiv:1911.02150", "arxiv:2101.00027", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:2306.01116", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-04T22:06:07Z
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
- de
- es
- fr
pipeline_tag: text-generation
inference: false
license: apache-2.0
---
<h1 style='text-align: center '>🚀 Falcon-40B</h1>
<h1 style='text-align: center '><em>fork of tiiuae/falcon-40b</em> </h1>
<h2 style='text-align: center '><em>Technology Innovation Institute (TII) LLM</em> </h2>
<h3 style='text-align: center '>All credit and thanks to TII for their work!</h3>
<img src="https://alt-web.xyz/images/rainbow.png" alt="Rainbow Solutions" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

**Falcon-40B is a 40B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**

*Paper coming soon 😊.*

# Call for Proposals : Falcon 40B - World's Top Ranked AI Model Empowers Exceptional Use Cases with Training Compute Power in Call for Proposals

We get it. AI is everywhere! Is it taking over? Before we debate the scant likelihood of a cyborg assassin from the future terminating humanity, let's get to know the newbie that has soared to top-spot on the leaderboard – Falcon 40B.

Falcon 40B is the UAE's and the Middle East's first home-grown, open-source large language model (LLM) with 40 billion parameters trained on one trillion tokens. The brainchild of the Technology Innovation Institute (TII), Falcon 40B has generated a tremendous amount of global interest and intrigue, but what really sweetens the deal is its transparent, open-source feature.

TII is now calling for proposals from users worldwide to submit their most creative ideas for Falcon 40B's deployment – allowing them to share their knowledge, enhance the software, and potentially transform their ideas into reality! Take that, ChatGPT!

Worth checking out? Give it a go and see for yourself!

Submit your proposal today! https://falconllm.tii.ae/call-for-proposal.php

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!

## Why use Falcon-40B?

* **It is the best open-source model currently available.** Falcon-40B outperforms [LLaMA](https://github.com/facebookresearch/llama), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT](https://huggingface.co/mosaicml/mpt-7b), etc. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
* ⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct).

💸 **Looking for a smaller, less expensive model?** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) is Falcon-40B's little brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-40b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).

You will need **at least 85-100GB of memory** to swiftly run inference with Falcon-40B.

# Model Card for Falcon-40B

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **License:** Apache 2.0 license.

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon-40B is trained mostly on English, German, Spanish, French, with limited capabilities also in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend users of Falcon-40B to consider finetuning it for the specific set of tasks of interest, and for guardrails and appropriate precautions to be taken for any production use.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-40b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon-40B was trained on 1,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora.
Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).

| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 75% | 750B | massive web crawl |
| RefinedWeb-Europe | 7% | 70B | European massive web crawl |
| Books | 6% | 60B | |
| Conversations | 5% | 50B | Reddit, StackOverflow, HackerNews |
| Code | 5% | 50B | |
| Technical | 2% | 20B | arXiv, PubMed, USPTO, etc. |

RefinedWeb-Europe is made of the following languages:

| **Language** | **Fraction of multilingual data** | **Tokens** |
|--------------|-----------------------------------|------------|
| German | 26% | 18B |
| Spanish | 24% | 17B |
| French | 23% | 16B |
| _Italian_ | 7% | 5B |
| _Portuguese_ | 4% | 3B |
| _Polish_ | 4% | 3B |
| _Dutch_ | 4% | 3B |
| _Romanian_ | 3% | 2B |
| _Czech_ | 3% | 2B |
| _Swedish_ | 2% | 1B |

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.

### Training Procedure

Falcon-40B was trained on 384 A100 40GB GPUs, using a 3D parallelism strategy (TP=8, PP=4, DP=12) combined with ZeRO.

#### Training Hyperparameters

| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 1.85e-4 | 4B tokens warm-up, cosine decay to 1.85e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 1152 | 100B tokens ramp-up |

#### Speeds, Sizes, Times

Training started in December 2022 and took two months.

## Evaluation

*Paper coming soon.*

See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

## Technical Specifications

### Model Architecture and Objective

Falcon-40B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with two layer norms.

For multiquery, we are using an internal variant which uses independent key and values per tensor parallel degree.

| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 60 | |
| `d_model` | 8192 | |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |

### Compute Infrastructure

#### Hardware

Falcon-40B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.

#### Software

Falcon-40B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)

## Citation

*Paper coming soon* 😊.
In the meantime, you can use the following information to cite:
```
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```

To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).

```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype = {arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```

## License

Falcon-40B is made available under the Apache 2.0 license.

## Contact

[email protected]
Thang203/us-only-mar11
Thang203
2024-03-11T19:39:43Z
1
0
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
2024-03-11T19:39:41Z
--- tags: - bertopic library_name: bertopic pipeline_tag: text-classification --- # us-only-mar11 This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("Thang203/us-only-mar11") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 20 * Number of training documents: 1908 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | models - language - model - language models - llms | 10 | -1_models_language_model_language models | | 0 | models - language - reasoning - language models - large | 616 | 0_models_language_reasoning_language models | | 1 | code - llms - language - models - programming | 467 | 1_code_llms_language_models | | 2 | learning - reinforcement - reinforcement learning - planning - rl | 139 | 2_learning_reinforcement_reinforcement learning_planning | | 3 | clinical - medical - models - language - data | 92 | 3_clinical_medical_models_language | | 4 | language - models - language models - llms - scaling | 86 | 4_language_models_language models_llms | | 5 | summarization - event - generation - events - text | 75 | 5_summarization_event_generation_events | | 6 | dialogue - dialog - systems - conversational - conversations | 59 | 6_dialogue_dialog_systems_conversational | | 7 | text - adversarial - attacks - detection - models | 58 | 7_text_adversarial_attacks_detection | | 8 | bias - biases - social - gender - models | 52 | 8_bias_biases_social_gender | | 9 | ai - chatgpt - ethical - artificial intelligence - intelligence | 49 | 9_ai_chatgpt_ethical_artificial intelligence | | 10 | education - students - programming - educational - questions | 49 | 10_education_students_programming_educational | | 11 | privacy - private - federated - attacks - models | 37 | 11_privacy_private_federated_attacks | | 12 | speech - audio - asr - speech recognition - recognition | 21 | 12_speech_audio_asr_speech recognition | | 13 | materials - chemistry - chemical - molecular - model | 20 | 13_materials_chemistry_chemical_molecular | | 14 | recommendation - user - item - reviews - news | 20 | 14_recommendation_user_item_reviews | | 15 | financial - sentiment - stock - data - market | 17 | 15_financial_sentiment_stock_data | | 16 | game - games - state - generate - state information | 15 | 16_game_games_state_generate | | 17 | legal - law - argumentative - court - standards | 14 | 17_legal_law_argumentative_court | | 18 | metadata - language - keyphrase - large - user intents | 12 | 18_metadata_language_keyphrase_large | </details> ## Training hyperparameters * calculate_probabilities: False * language: english * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: 20 * seed_topic_list: None * top_n_words: 10 * verbose: True * zeroshot_min_similarity: 0.7 * zeroshot_topic_list: None ## Framework versions * Numpy: 1.25.2 * HDBSCAN: 0.8.33 * UMAP: 0.5.5 * Pandas: 1.5.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.5.1 * Transformers: 4.38.2 * Numba: 0.58.1 * Plotly: 5.15.0 * Python: 3.10.12
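## Assigning topics to new documents

Beyond inspecting the trained topics, the loaded model can also assign topics to unseen documents with `transform` (a minimal sketch; the example document below is made up):

```python
from bertopic import BERTopic

# Load the trained topic model from the Hub
topic_model = BERTopic.load("Thang203/us-only-mar11")

# Assign each new document to its closest topic; -1 marks outliers
docs = ["Large language models for code generation and automated program repair."]
topics, probs = topic_model.transform(docs)
print(topics, probs)
```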
adeebkm/EmotionDetectionModel
adeebkm
2024-03-11T19:23:42Z
0
0
tensorflow
[ "tensorflow", "tf", "text-classification", "bert", "en", "license:apache-2.0", "region:us" ]
text-classification
2024-03-11T18:55:52Z
--- language: en license: apache-2.0 tags: - text-classification - tensorflow - bert library_name: tensorflow --- # BERT Sentiment Classifier This model is a fine-tuned version of BERT (Bidirectional Encoder Representations from Transformers) designed to classify text sentiment into positive or negative. It's trained on a large corpus of movie reviews and can be adapted for similar natural language processing tasks. ## Requirements To use this model, you need the following packages: - TensorFlow 2.x - ktrain ## Installation First, ensure you have Python 3.6 or newer installed. Then, install the required packages using pip: ```bash pip install tensorflow ktrain ``` ## Loading the Predictor To load the predictor, use the following code snippet. Ensure the model directory ('./model') is correctly specified to the location where you've downloaded the model files. ```python import ktrain predictor = ktrain.load_predictor('./model') ``` ## Making Predictions You can make predictions with the model as follows: ```python text = "I absolutely loved this movie! The acting was great and the story was compelling." prediction = predictor.predict(text) print("Sentiment:", "Positive" if prediction[0] == 1 else "Negative") ``` ## Model Files This model repository includes the following files: - `tf_model.h5`: The model weights. - `tf_model.preproc`: The preprocessing data for the model inputs, ensuring input data is in the correct format for prediction. ## Additional Notes This model is intended for educational and research purposes. It may require further tuning for optimal performance on specific tasks. For any questions or issues, please open an issue in the repository or contact the model maintainers.
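## Batch Predictions

The predictor also accepts a list of texts, which is convenient for scoring many reviews at once (a minimal sketch under the same setup as above; the example sentences are made up):

```python
import ktrain

# Load the saved predictor (weights plus preprocessing) from the model directory
predictor = ktrain.load_predictor('./model')

# Score several reviews in one call; labels come back in the format used at training time
texts = [
    "A dull, predictable plot with wooden acting.",
    "One of the most moving films I have seen this year.",
]
for text, label in zip(texts, predictor.predict(texts)):
    print(label, "-", text)
```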
jackshannon/phi-1_5-finetuned-question-generation
jackshannon
2024-03-11T19:22:46Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "license:mit", "region:us" ]
null
2024-03-11T17:05:21Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/phi-1_5 model-index: - name: phi-1_5-finetuned-question-generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-finetuned-question-generation This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9170 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9836 | 0.09 | 100 | 2.8641 | | 2.8536 | 0.17 | 200 | 2.7929 | | 2.8051 | 0.26 | 300 | 2.7567 | | 2.7782 | 0.35 | 400 | 2.7092 | | 2.7542 | 0.44 | 500 | 2.6946 | | 2.6978 | 0.52 | 600 | 2.6719 | | 2.6833 | 0.61 | 700 | 2.6497 | | 2.6504 | 0.7 | 800 | 2.6172 | | 2.6228 | 0.78 | 900 | 2.6008 | | 2.6219 | 0.87 | 1000 | 2.5802 | | 2.5629 | 0.96 | 1100 | 2.5519 | | 2.5315 | 1.05 | 1200 | 2.5255 | | 2.4813 | 1.13 | 1300 | 2.5156 | | 2.4539 | 1.22 | 1400 | 2.4884 | | 2.4466 | 1.31 | 1500 | 2.4660 | | 2.4205 | 1.39 | 1600 | 2.4431 | | 2.3937 | 1.48 | 1700 | 2.4238 | | 2.3686 | 1.57 | 1800 | 2.4069 | | 2.3209 | 1.66 | 1900 | 2.3826 | | 2.3409 | 1.74 | 2000 | 2.3606 | | 2.2874 | 1.83 | 2100 | 2.3453 | | 2.309 | 1.92 | 2200 | 2.3222 | | 2.2676 | 2.01 | 2300 | 2.2981 | | 2.1734 | 2.09 | 2400 | 2.2892 | | 2.1495 | 2.18 | 2500 | 2.2549 | | 2.1163 | 2.27 | 2600 | 2.2401 | | 2.1 | 2.35 | 2700 | 2.2317 | | 2.1046 | 2.44 | 2800 | 2.2153 | | 2.1138 | 2.53 | 2900 | 2.1938 | | 2.0691 | 2.62 | 3000 | 2.1775 | | 2.0945 | 2.7 | 3100 | 2.1563 | | 2.045 | 2.79 | 3200 | 2.1408 | | 2.0212 | 2.88 | 3300 | 2.1229 | | 2.0011 | 2.96 | 3400 | 2.1156 | | 1.983 | 3.05 | 3500 | 2.0942 | | 1.9309 | 3.14 | 3600 | 2.0769 | | 1.8844 | 3.23 | 3700 | 2.0709 | | 1.9085 | 3.31 | 3800 | 2.0589 | | 1.8827 | 3.4 | 3900 | 2.0405 | | 1.8511 | 3.49 | 4000 | 2.0310 | | 1.8807 | 3.57 | 4100 | 2.0170 | | 1.8437 | 3.66 | 4200 | 2.0045 | | 1.8667 | 3.75 | 4300 | 2.0036 | | 1.8081 | 3.84 | 4400 | 1.9886 | | 1.8688 | 3.92 | 4500 | 1.9767 | | 1.8187 | 4.01 | 4600 | 1.9652 | | 1.7511 | 4.1 | 4700 | 1.9592 | | 1.7384 | 4.18 | 4800 | 1.9558 | | 1.7843 | 4.27 | 4900 | 1.9474 | | 1.7389 | 4.36 | 5000 | 1.9412 | | 1.7465 | 4.45 | 5100 | 1.9346 | | 1.7483 | 4.53 | 5200 | 1.9290 | | 1.7149 | 4.62 | 5300 | 1.9246 | | 1.7154 | 4.71 | 5400 | 1.9211 | | 1.7637 | 4.8 | 5500 | 1.9188 | | 1.7559 | 4.88 | 5600 | 1.9181 | | 1.7204 | 4.97 | 5700 | 1.9170 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
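## Loading the adapter

Only LoRA adapter weights are stored in this repository, so inference requires attaching them to the base model. Below is a minimal sketch with PEFT (untested against this exact adapter; the prompt format used during training is not documented, so the prompt is illustrative):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads microsoft/phi-1_5 and applies this repository's LoRA adapter on top
model = AutoPeftModelForCausalLM.from_pretrained(
    "jackshannon/phi-1_5-finetuned-question-generation",
    torch_dtype=torch.float32,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

inputs = tokenizer("Generate a question about the solar system.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```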
ardasamet/mistral-7b-fake-ft
ardasamet
2024-03-11T19:19:30Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-03-11T19:04:39Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ model-index: - name: ardasamet/mistral-7b-fake-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ardasamet/mistral-7b-fake-ft This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.5918 | 0.92 | 3 | 3.9531 | | 4.0342 | 1.85 | 6 | 3.4187 | | 3.4573 | 2.77 | 9 | 2.9750 | | 2.2651 | 4.0 | 13 | 2.5798 | | 2.7052 | 4.92 | 16 | 2.3497 | | 2.4053 | 5.85 | 19 | 2.1773 | | 2.1965 | 6.77 | 22 | 2.0325 | | 1.5431 | 8.0 | 26 | 1.9541 | | 2.0077 | 8.92 | 29 | 1.9105 | | 1.3889 | 9.23 | 30 | 1.8977 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
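## Loading the adapter

This repository contains only LoRA adapter weights. One way to run it is to load the GPTQ base model and attach the adapter with PEFT (a minimal sketch; it assumes `optimum` and `auto-gptq` are installed for GPTQ inference and that the Mistral-Instruct `[INST]` format applies):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"

# Load the quantized base model, then apply this repository's adapter
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "ardasamet/mistral-7b-fake-ft")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("[INST] Hello, who are you? [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```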
dyumat/phi-1_5-arxiv-physics
dyumat
2024-03-11T19:19:22Z
35
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T19:16:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Nexesenex/Undi95_Miqu-70B-Alpaca-DPO-iMat.GGUF
Nexesenex
2024-03-11T19:18:46Z
94
3
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-02-08T19:30:57Z
GGUF Quants with iMatrix for: https://huggingface.co/Undi95/Miqu-70B-Alpaca-DPO

Q3_K_M to be uploaded shortly. Q3_K_S, IQ3_XXS, Q2_K, Q2_K_S, IQ2_XS, IQ2_XXS to follow.

LlamaCPP benchmarks on the Q3_K_M with iMatrix shared here:
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Hellaswag,84.5,,400,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Hellaswag,83.6,,1000,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Arc-Challenge,58.52842809,,299,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Arc-Easy,77.36842105,,570,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,MMLU,49.84025559,,313,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Thruthful-QA,42.83965728,,817,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,Winogrande,78.7687,,1267,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,wikitext,4.2963,512,512,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,81
- Undi95_Miqu-70B-Alpaca-DPO-b2101-iMat-c32_ch1000-Q3_K_M.gguf,-,wikitext,3.8397,512,512,2024-02-07 00:00:00,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,Nexesenex,655

LlamaCPP benchmarks on a non-iMatrix Q3_K_M released by Undi95:
- Miqu-70B-DPO.q3_k_m.gguf,-,Hellaswag,84.5,400,,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,NeverSleep,
- Miqu-70B-DPO.q3_k_m.gguf,-,Hellaswag,83.8,1000,,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,NeverSleep,
- Miqu-70B-DPO.q3_k_m.gguf,-,Arc-Challenge,57.85953177,,299,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,NeverSleep,
- Miqu-70B-DPO.q3_k_m.gguf,-,Arc-Easy,77.36842105,,570,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,NeverSleep,
- Miqu-70B-DPO.q3_k_m.gguf,-,MMLU,50.15974441,,313,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,NeverSleep,
- Miqu-70B-DPO.q3_k_m.gguf,-,Thruthful-QA,42.47246022,,817,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,NeverSleep,
- Miqu-70B-DPO.q3_k_m.gguf,-,Winogrande,78.7687,,1267,2024-02-07 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,NeverSleep,
- Miqu-70B-DPO.q3_k_m.gguf,-,wikitext,4.3018,512,512,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,NeverSleep,81
- Miqu-70B-DPO.q3_k_m.gguf,-,wikitext,3.8469,512,512,2024-02-07 00:00:00,,70b,Mistral_Medium,32768,,,GGUF,NeverSleep,NeverSleep,655

Quite convincing compared to the original Miqu
with iMatrix:
- Miqu-1-70b-Requant-b1989-iMat-c32_ch400-Q3_K_M.gguf,-,Arc-Challenge,57.19063545,,299,2024-01-29 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,- Miqudev,Nexesenex,
- Miqu-1-70b-Requant-b1989-iMat-c32_ch400-Q3_K_M.gguf,-,Arc-Easy,77.19298246,,570,2024-01-29 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,- Miqudev,Nexesenex,
- Miqu-1-70b-Requant-b1989-iMat-c32_ch400-Q3_K_M.gguf,-,MMLU,50.15974441,,313,2024-01-29 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,- Miqudev,Nexesenex,
- Miqu-1-70b-Requant-b1989-iMat-c32_ch400-Q3_K_M.gguf,-,Thruthful-QA,41.49326805,,817,2024-01-29 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,- Miqudev,Nexesenex,
- Miqu-1-70b-Requant-b1989-iMat-c32_ch400-Q3_K_M.gguf,-,Winogrande,78.8477,,1267,2024-01-29 05:40:00,,70b,Mistral_Medium,32768,,,GGUF,- Miqudev,Nexesenex,
- Miqu-1-70b-Requant-b1989-iMat-c32_ch400-Q3_K_M.gguf,-,wikitext,4.2957,512,512,2024-01-29 00:00:00,RBF1000000,70b,Mistral_Medium,32768,,,GGUF,- Miqudev,Nexesenex,81
- Miqu-1-70b-Requant-b1989-iMat-c32_ch400-Q3_K_M.gguf,-,wikitext,3.8380,512,512,2024-01-29 00:00:00,RBF1000000,70b,Mistral_Medium,32768,,,GGUF,- Miqudev,Nexesenex,655

The TQA score shows a slight bonus, thanks to the DPO training, I believe.
The slightly improved ARC benchmarks (a rare thing on DPO releases!) and the preserved perplexity show that the model was not dumbed down by the DPO training.
In ST, the model performs beautifully.
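For reference, a downloaded quant can be run locally with `llama-cpp-python` (a minimal sketch; the file name is illustrative and the Alpaca prompt format is assumed from the model name):

```python
from llama_cpp import Llama

# Point model_path at the GGUF quant you downloaded (file name is illustrative)
llm = Llama(
    model_path="Undi95_Miqu-70B-Alpaca-DPO-Q3_K_M.gguf",
    n_ctx=4096,  # the model supports up to 32768, but a smaller context saves RAM
)

prompt = "### Instruction:\nWrite a haiku about winter.\n\n### Response:\n"
output = llm(prompt, max_tokens=64)
print(output["choices"][0]["text"])
```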
crncskn/radiovers16v
crncskn
2024-03-11T19:15:19Z
177
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit_mae", "pretraining", "masked-auto-encoding", "generated_from_trainer", "dataset:imagefolder", "endpoints_compatible", "region:us" ]
null
2024-03-11T17:04:25Z
---
tags:
- masked-auto-encoding
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: radiovers16v
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# radiovers16v

This model was trained on the /kaggle/radioai/radiology_ai dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4036

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3.125e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40.0

### Training results

### Framework versions

- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
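## Inference sketch

Since the checkpoint was trained with masked auto-encoding, one way to probe it is through the ViT-MAE pretraining head (a minimal sketch, assuming the checkpoint loads with the standard ViT-MAE classes; the image path is illustrative):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

processor = AutoImageProcessor.from_pretrained("crncskn/radiovers16v")
model = ViTMAEForPreTraining.from_pretrained("crncskn/radiovers16v")

# Any radiology image; the model masks patches and tries to reconstruct them
image = Image.open("chest_xray.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

print(outputs.loss)  # reconstruction loss over the masked patches
```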
Takekazuchi/Caracam
Takekazuchi
2024-03-11T19:09:16Z
177
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-19T05:24:47Z
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-vit-base-patch16
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5851995594482614
---

# Caracam (gen 1)

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9156
- Accuracy: 0.5852

## Model description

First generation of my AI that tells you what car you took a picture of. \
More versions coming soon with accuracy ratings of 85% and higher! Trained on 70+ brands with 2700+ cars going from 1945-2024. \
***App coming soon (also called Caracam) to Android and iOS*** \
(Late March - Early April 2024).

In the future I will take user opinion into account on what brands to add. The app will be updated semi-yearly with user-suggested car brands! \
If you wish to support Project Caracam, please visit my [Patreon](https://www.patreon.com/Caracam) or my [Cashapp](https://cash.app/$Clippayy)!

## Intended uses & limitations

***NOT FOR COMMERCIAL USE OUTSIDE OF OFFICIAL CARACAM MOBILE APP***

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.0308        | 1.0   | 5362  | 3.6948          | 0.2491   |
| 2.694         | 2.0   | 10725 | 2.2586          | 0.5199   |
| 2.4475        | 3.0   | 16086 | 1.9156          | 0.5852   |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
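## How to try it now

Until the app ships, the checkpoint can be queried directly with the `transformers` image-classification pipeline (a minimal sketch; the image path is illustrative):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Takekazuchi/Caracam")

# Returns the most likely cars with confidence scores
for prediction in classifier("my_car_photo.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.2%}")
```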
cdillinger/cnn_news_summary_model_trained_on_reduced_data
cdillinger
2024-03-11T19:08:49Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-04T13:10:26Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: cnn_news_summary_model_trained_on_reduced_data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_news_summary_model_trained_on_reduced_data This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6040 - Rouge1: 0.2179 - Rouge2: 0.094 - Rougel: 0.184 - Rougelsum: 0.184 - Generated Length: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | No log | 1.0 | 431 | 1.6239 | 0.2175 | 0.0934 | 0.1831 | 0.183 | 19.0 | | 1.92 | 2.0 | 862 | 1.6075 | 0.2169 | 0.0933 | 0.1829 | 0.1827 | 19.0 | | 1.8221 | 3.0 | 1293 | 1.6040 | 0.2179 | 0.094 | 0.184 | 0.184 | 19.0 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
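## Inference example

The fine-tuned checkpoint can be used through the summarization pipeline (a minimal sketch; since the base model is T5, the conventional `summarize:` prefix is assumed, and the article text is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="cdillinger/cnn_news_summary_model_trained_on_reduced_data",
)

article = "Replace this placeholder with the news article you want to summarize."
summary = summarizer("summarize: " + article, max_length=32, min_length=8)
print(summary[0]["summary_text"])
```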
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_sign_acc_ef_signal_it_273
furrutiav
2024-03-11T19:06:23Z
91
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-03-11T19:01:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
watersplash/waste-classification
watersplash
2024-03-11T18:56:21Z
351
1
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-24T22:44:32Z
---
library_name: transformers
metrics:
- accuracy
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

An image classification model fine-tuned from ViT. This model can classify garbage images.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Finetuned from model:** ViT

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/KomaliValluru/waste-classification

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- Target classes: Battery, Biological, Brown-grass, Cardboard, Clothes, Green-Glass, Metal, Paper, Plastic, Shoes, Trash, White-Glass

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

https://www.kaggle.com/datasets/mostafaabla/garbage-classification

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

Accuracy

### Results

Accuracy: 98%

#### Summary

- **Hours used:** 1 hour 30 minutes
- **References:** Based on the model yangy50/garbage-classification
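## How to Get Started with the Model

A minimal inference sketch with the image-classification pipeline (the image path is illustrative):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="watersplash/waste-classification")

# Predicts one of the twelve target classes listed above
result = classifier("waste_item.jpg")
print(result[0]["label"], result[0]["score"])
```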
dchatca/vistral_economics_summarization_v4.2
dchatca
2024-03-11T18:52:46Z
62
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-11T18:31:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ralux3/sdxl-lora
ralux3
2024-03-11T18:45:52Z
26
2
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "diffusers-training", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-03-06T16:56:47Z
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers-training
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a <s0><s1> blue bedroom, in the style of <s0><s1>'
  output:
    url: "image_0.png"
- text: 'a <s0><s1> blue bedroom, in the style of <s0><s1>'
  output:
    url: "image_1.png"
- text: 'a <s0><s1> blue bedroom, in the style of <s0><s1>'
  output:
    url: "image_2.png"
- text: 'a <s0><s1> blue bedroom, in the style of <s0><s1>'
  output:
    url: "image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: room in the style of <s0><s1>
license: openrail++
---

# SDXL LoRA DreamBooth - ralux3/sdxl-lora

<Gallery />

## Model description

### These are ralux3/sdxl-lora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

## Download model

### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke

- **LoRA**: download **[`sdxl-lora.safetensors` here 💾](/ralux3/sdxl-lora/blob/main/sdxl-lora.safetensors)**.
    - Place it in your `models/Lora` folder.
    - On AUTOMATIC1111, load the LoRA by adding `<lora:sdxl-lora:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`sdxl-lora_emb.safetensors` here 💾](/ralux3/sdxl-lora/blob/main/sdxl-lora_emb.safetensors)**.
    - Place it in your `embeddings` folder.
    - Use it by adding `sdxl-lora_emb` to your prompt. For example, `room in the style of sdxl-lora_emb` (you need both the LoRA and the embeddings as they were trained together for this LoRA).

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ralux3/sdxl-lora', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='ralux3/sdxl-lora', filename='sdxl-lora_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)

image = pipeline('a <s0><s1> blue bedroom, in the style of <s0><s1>').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Trigger words

To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:

To trigger concept `TOK` → use `<s0><s1>` in your prompt.

## Details

All [Files & versions](/ralux3/sdxl-lora/tree/main).

The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).

LoRA for the text encoder was enabled: False.

Pivotal tuning was enabled: True.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
khizer-kt/Embeded_llama2_7b_chat_UG_Handbook
khizer-kt
2024-03-11T18:45:27Z
63
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-11T18:42:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
edgilr/clasificador-rotten-tomatoes-funnel-transfomer
edgilr
2024-03-11T18:43:45Z
77
0
transformers
[ "transformers", "safetensors", "funnel", "text-classification", "classification", "generated_from_trainer", "base_model:funnel-transformer/small-base", "base_model:finetune:funnel-transformer/small-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-11T18:43:27Z
--- license: apache-2.0 base_model: funnel-transformer/small-base tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-rotten-tomatoes-funnel-transfomer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-rotten-tomatoes-funnel-transfomer This model is a fine-tuned version of [funnel-transformer/small-base](https://huggingface.co/funnel-transformer/small-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4962 - Accuracy: 0.8856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4326 | 1.0 | 1067 | 0.4819 | 0.8471 | | 0.2963 | 2.0 | 2134 | 0.4710 | 0.8856 | | 0.1752 | 3.0 | 3201 | 0.4962 | 0.8856 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
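## Inference example

A minimal sketch with the text-classification pipeline (the head's label names are not documented, so they may come back as generic `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="edgilr/clasificador-rotten-tomatoes-funnel-transfomer",
)

print(classifier("A heartfelt, beautifully shot film."))
```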
Holarissun/gptj6b-aisft-gsm8k-seq
Holarissun
2024-03-11T18:43:29Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:EleutherAI/gpt-j-6b", "base_model:adapter:EleutherAI/gpt-j-6b", "license:apache-2.0", "region:us" ]
null
2024-03-11T18:43:09Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: EleutherAI/gpt-j-6b model-index: - name: gptj6b-aisft-gsm8k-seq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gptj6b-aisft-gsm8k-seq This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
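## Loading the adapter

Only adapter weights are stored here; below is a minimal loading sketch with PEFT (untested against this adapter, and the GSM8K-style prompt is illustrative):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads EleutherAI/gpt-j-6b and applies this repository's adapter
model = AutoPeftModelForCausalLM.from_pretrained(
    "Holarissun/gptj6b-aisft-gsm8k-seq", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")

prompt = "Q: A farmer has 12 sheep and buys 7 more. How many sheep does he have?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```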
EncryptedBinary/results_modified_7b_p1_5epoch
EncryptedBinary
2024-03-11T18:40:31Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-16T08:22:46Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
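The same quantization settings can be reproduced in code when reloading the base model (a sketch that simply mirrors the values listed above):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes config used during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```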
pmu/my-vase-acb
pmu
2024-03-11T18:33:49Z
0
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-11T18:25:20Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Vase-acb Dreambooth model trained by pmu following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 4MC22IS075 Sample pictures of this concept: ![0](https://huggingface.co/pmu/my-vase-acb/resolve/main/sample_images/acb_output(1).png)
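The concept can be sampled with the diffusers `StableDiffusionPipeline` (a minimal sketch; the exact instance prompt is not documented, so the token `acb` below is a guess from the repository name):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "pmu/my-vase-acb", torch_dtype=torch.float16
).to("cuda")

# "acb" is assumed to be the instance token; adjust the prompt if sampling looks off
image = pipe("a photo of acb vase on a wooden table").images[0]
image.save("vase.png")
```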
gate369/Blurred-Beagle-7b-slerp
gate369
2024-03-11T18:33:47Z
26
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "alnrg2arg/blockchainlabs_7B_merged_test2_4", "222gate/BrurryDog-7b-v0.1", "conversational", "base_model:alnrg2arg/blockchainlabs_7B_merged_test2_4", "base_model:merge:alnrg2arg/blockchainlabs_7B_merged_test2_4", "base_model:gate369/BrurryDog-7b-v0.1", "base_model:merge:gate369/BrurryDog-7b-v0.1", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-20T20:00:25Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - alnrg2arg/blockchainlabs_7B_merged_test2_4 - 222gate/BrurryDog-7b-v0.1 base_model: - alnrg2arg/blockchainlabs_7B_merged_test2_4 - 222gate/BrurryDog-7b-v0.1 model-index: - name: Blurred-Beagle-7b-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/Blurred-Beagle-7b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.58 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/Blurred-Beagle-7b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.95 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/Blurred-Beagle-7b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 69.39 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/Blurred-Beagle-7b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/Blurred-Beagle-7b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/Blurred-Beagle-7b-slerp name: Open LLM Leaderboard --- # Blurred-Beagle-7b-slerp Blurred-Beagle-7b-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [alnrg2arg/blockchainlabs_7B_merged_test2_4](https://huggingface.co/alnrg2arg/blockchainlabs_7B_merged_test2_4) * [222gate/BrurryDog-7b-v0.1](https://huggingface.co/222gate/BrurryDog-7b-v0.1) ## 🧩 Configuration ```yaml slices: - sources: - model: alnrg2arg/blockchainlabs_7B_merged_test2_4 layer_range: [0, 32] - model: 222gate/BrurryDog-7b-v0.1 layer_range: [0, 32] merge_method: slerp base_model: alnrg2arg/blockchainlabs_7B_merged_test2_4 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "222gate/Blurred-Beagle-7b-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) 
pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_222gate__Blurred-Beagle-7b-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |74.80| |AI2 Reasoning Challenge (25-Shot)|72.78| |HellaSwag (10-Shot) |88.58| |MMLU (5-Shot) |64.95| |TruthfulQA (0-shot) |69.39| |Winogrande (5-shot) |83.19| |GSM8k (5-shot) |69.90|
liminerity/Blur-7B-slerp-v0.1
liminerity
2024-03-11T18:33:40Z
1,381
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/Marcoro14-7B-slerp", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-14T22:13:06Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/Marcoro14-7B-slerp model-index: - name: Blur-7B-slerp-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.77 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7B-slerp-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.58 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7B-slerp-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.18 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7B-slerp-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.64 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7B-slerp-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.14 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7B-slerp-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 72.1 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7B-slerp-v0.1 name: Open LLM Leaderboard --- things are bout' to get blurry # Blur-7B-slerp-v0.1 Blur-7B-slerp-v0.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/Marcoro14-7B-slerp layer_range: [0, 32] merge_method: slerp base_model: mlabonne/Marcoro14-7B-slerp parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "222gate/Blur-7B-slerp-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = 
pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Blur-7B-slerp-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |72.40| |AI2 Reasoning Challenge (25-Shot)|68.77| |HellaSwag (10-Shot) |86.58| |MMLU (5-Shot) |65.18| |TruthfulQA (0-shot) |60.64| |Winogrande (5-shot) |81.14| |GSM8k (5-shot) |72.10|
liminerity/dhbacmes-3b-slerp
liminerity
2024-03-11T18:33:27Z
137
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/herbaccbaccules-3b-slerp", "KnutJaegersberg/Deita-2b", "conversational", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-25T04:26:39Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - liminerity/herbaccbaccules-3b-slerp - KnutJaegersberg/Deita-2b model-index: - name: dhbacmes-3b-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 45.22 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 70.77 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 52.94 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 40.41 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 65.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 43.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard --- # dhbacmes-3b-slerp dhbacmes-3b-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [liminerity/herbaccbaccules-3b-slerp](https://huggingface.co/liminerity/herbaccbaccules-3b-slerp) * [KnutJaegersberg/Deita-2b](https://huggingface.co/KnutJaegersberg/Deita-2b) ## 🧩 Configuration ```yaml slices: - sources: - model: liminerity/herbaccbaccules-3b-slerp layer_range: [0, 40] - model: KnutJaegersberg/Deita-2b layer_range: [0, 40] merge_method: slerp base_model: liminerity/herbaccbaccules-3b-slerp parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__dhbacmes-3b-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |53.02| |AI2 Reasoning Challenge (25-Shot)|45.22| |HellaSwag (10-Shot) |70.77| |MMLU (5-Shot) |52.94| |TruthfulQA (0-shot) |40.41| |Winogrande (5-shot) |65.11| |GSM8k (5-shot) |43.67|
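## 💻 Usage

The card above stops at the merge configuration; below is a minimal inference sketch in the style of the sibling merge cards in this collection, assuming the merged tokenizer ships a chat template (the sampling settings are illustrative, not tuned):

```python
# In a notebook, install dependencies first: !pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "liminerity/dhbacmes-3b-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat turns with the model's own template, then generate
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```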
liminerity/Neurotic-Jomainotrik-7b-slerp
liminerity
2024-03-11T18:32:40Z
58
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/merge", "bardsai/jaskier-7b-dpo-v5.6", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-25T21:00:27Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - liminerity/merge - bardsai/jaskier-7b-dpo-v5.6 model-index: - name: Neurotic-Jomainotrik-7b-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.15 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.28 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 77.64 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 85.4 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.99 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp name: Open LLM Leaderboard --- # Neurotic-Jomainotrik-7b-slerp Neurotic-Jomainotrik-7b-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [liminerity/merge](https://huggingface.co/liminerity/merge) * [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) ## 🧩 Configuration ```yaml slices: - sources: - model: liminerity/merge layer_range: [0, 32] - model: bardsai/jaskier-7b-dpo-v5.6 layer_range: [0, 32] merge_method: slerp base_model: liminerity/merge parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Neurotic-Jomainotrik-7b-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |76.40| |AI2 Reasoning Challenge (25-Shot)|72.95| |HellaSwag (10-Shot) |89.15| |MMLU (5-Shot) |64.28| |TruthfulQA (0-shot) |77.64| |Winogrande (5-shot) |85.40| |GSM8k (5-shot) |68.99|
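## 💻 Usage

No usage snippet is included above; as a sketch, the same `transformers` pipeline pattern used on the other merge cards applies here (chat-template support is assumed from the Mistral lineage):

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "liminerity/Neurotic-Jomainotrik-7b-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt, then run sampled generation
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```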
liminerity/mm4-3b
liminerity
2024-03-11T18:32:26Z
227
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "dataset:teknium/GPT4-LLM-Cleaned", "dataset:vicgalle/alpaca-gpt4", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-27T02:20:46Z
--- license: apache-2.0 datasets: - teknium/GPT4-LLM-Cleaned - vicgalle/alpaca-gpt4 model-index: - name: mm4-3b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 44.8 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/mm4-3b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 70.41 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/mm4-3b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 50.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/mm4-3b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 43.2 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/mm4-3b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 66.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/mm4-3b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 43.82 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/mm4-3b name: Open LLM Leaderboard --- MM4-3b is a Llama-based model I made through extensive training and merging; a fuller explanation will follow (I literally made so many models today). Title: Divergent Knowledge Enhancement through Retrograde Merging Strategies: Redefining Accuracy Perspectives in Language Model Evolution Abstract: Have you picked up any bad habits, or have you ever learned to do something incorrectly, only to realize you must completely relearn whatever it is you're trying to accomplish? In this proposal, we present an innovative and unconventional approach to enhancing the performance and knowledge base of natural language models. Our proposed method, titled 'Divergent Knowledge Enhancement through Retrograde Merging Strategies' (DKE-RS), aims to challenge traditional practices in model development by incorporating a deliberate back-and-forth merger between high and low accuracy language models. The initial conceptualization of DKE-RS stemmed from the realization that learning often encompasses both acquisition and unlearning, as encapsulated by the quote, "learning is just as sacred as unlearning." The proposed technique commences with a baseline model, 'blur-7b,' attaining an accuracy rate of 72.1%, which is subsequently merged with a Mistral model fine-tuned on the Dolphin dataset that achieves only a 46% accuracy level.
By deliberately merging with less accurate models and retracing the evolutionary process, DKE-RS aims to broaden the knowledge base of the resulting model. This strategy, dubbed 'making the bad good,' intentionally degrades the initial accuracy in an effort to refine it, thus breaking conventional iterative improvements for innovative progression. The DKE-RS method challenges the status quo by not solely relying on a linear enhancement trajectory, instead adopting a more holistic and diverse approach. We anticipate that this non-linear merger process will further diversify the model's knowledge base, thereby creating a more resilient and well-rounded language generation tool, capable of handling complex contexts with a broader understanding. Through thorough experimentation and analysis, we plan to assess the effectiveness and potential drawbacks of DKE-RS, comparing it to traditional merging techniques. The results from such evaluations will provide valuable insights into the efficacy of this divergent strategy in the landscape of natural language model development. We posit that the Divergent Knowledge Enhancement through Retrograde Merging Strategies approach contributes a significant and compelling step forward in the field, prompting thought-provoking discourse about the nature of accuracy refinement and model progression. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__mm4-3b) | Metric |Value| |---------------------------------|----:| |Avg. |53.22| |AI2 Reasoning Challenge (25-Shot)|44.80| |HellaSwag (10-Shot) |70.41| |MMLU (5-Shot) |50.90| |TruthfulQA (0-shot) |43.20| |Winogrande (5-shot) |66.22| |GSM8k (5-shot) |43.82|
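The card describes the DKE-RS back-and-forth merging in prose only; purely for illustration, a single retrograde step of the kind described (a high-accuracy baseline slerp-merged with a lower-accuracy Dolphin fine-tune) would look like the mergekit configs on the sibling cards. The model names below are stand-ins, not the actual checkpoints used:

```yaml
slices:
  - sources:
      - model: liminerity/Blur-7B-slerp-v0.1   # stand-in for the high-accuracy baseline ("blur-7b", ~72.1%)
        layer_range: [0, 32]
      - model: cognitivecomputations/dolphin-2.2.1-mistral-7b   # stand-in for the ~46% Dolphin fine-tune
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Blur-7B-slerp-v0.1
parameters:
  t:
    - value: 0.5   # equal blend; DKE-RS would then merge back toward higher-accuracy checkpoints
dtype: bfloat16
```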
gate369/BrurryDog-7b-v0.1
gate369
2024-03-11T18:31:21Z
11
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "udkai/Turdus", "leveldevai/TurdusBeagle-7B", "liminerity/Blur-7b-v1.21", "base_model:leveldevai/TurdusBeagle-7B", "base_model:merge:leveldevai/TurdusBeagle-7B", "base_model:liminerity/Blur-7b-v1.21", "base_model:merge:liminerity/Blur-7b-v1.21", "base_model:udkai/Turdus", "base_model:merge:udkai/Turdus", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-20T00:40:44Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - udkai/Turdus - leveldevai/TurdusBeagle-7B - liminerity/Blur-7b-v1.21 base_model: - udkai/Turdus - leveldevai/TurdusBeagle-7B - liminerity/Blur-7b-v1.21 model-index: - name: BrurryDog-7b-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.53 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/BrurryDog-7b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.37 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/BrurryDog-7b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/BrurryDog-7b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 70.05 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/BrurryDog-7b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.87 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/BrurryDog-7b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 66.87 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=222gate/BrurryDog-7b-v0.1 name: Open LLM Leaderboard --- # BrurryDog-7b-v0.1 BrurryDog-7b-v0.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [udkai/Turdus](https://huggingface.co/udkai/Turdus) * [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B) * [liminerity/Blur-7b-v1.21](https://huggingface.co/liminerity/Blur-7b-v1.21) ## 🧩 Configuration ```yaml models: - model: udkai/Turdus parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 - model: leveldevai/TurdusBeagle-7B parameters: density: 0.5 weight: [0, 0.3, 0.7, 1] # weight gradient - model: liminerity/Blur-7b-v1.21 parameters: density: 0.33 weight: - filter: mlp value: 0.5 - value: 0 merge_method: ties base_model: udkai/Turdus parameters: normalize: true int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "222gate/BrurryDog-7b-v0.1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, 
add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_222gate__BrurryDog-7b-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |74.24| |AI2 Reasoning Challenge (25-Shot)|72.53| |HellaSwag (10-Shot) |88.37| |MMLU (5-Shot) |64.74| |TruthfulQA (0-shot) |70.05| |Winogrande (5-shot) |82.87| |GSM8k (5-shot) |66.87|
liminerity/Blur-7b-v1.21
liminerity
2024-03-11T18:30:17Z
49
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "udkai/Turdus", "decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP", "liminerity/Blur-7b-v1.2", "base_model:decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP", "base_model:merge:decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP", "base_model:liminerity/Blur-7b-v1.2", "base_model:merge:liminerity/Blur-7b-v1.2", "base_model:udkai/Turdus", "base_model:merge:udkai/Turdus", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-18T04:30:23Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - udkai/Turdus - decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP - liminerity/Blur-7b-v1.2 base_model: - udkai/Turdus - decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP - liminerity/Blur-7b-v1.2 model-index: - name: Blur-7b-v1.21 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.82 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.07 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.85 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 67.99 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.82 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.52 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21 name: Open LLM Leaderboard --- # Blur-7b-v1.21 Blur-7b-v1.21 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [udkai/Turdus](https://huggingface.co/udkai/Turdus) * [decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP](https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP) * [liminerity/Blur-7b-v1.2](https://huggingface.co/liminerity/Blur-7b-v1.2) ## 🧩 Configuration ```yaml models: - model: udkai/Turdus parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 - model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP parameters: density: 0.5 weight: [0, 0.3, 0.7, 1] # weight gradient - model: liminerity/Blur-7b-v1.2 parameters: density: 0.33 weight: - filter: mlp value: 0.5 - value: 0 merge_method: ties base_model: fblgit/UNA-TheBeagle-7b-v1 parameters: normalize: true int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "liminerity/Blur-7b-v1.21" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) 
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Blur-7b-v1.21) | Metric |Value| |---------------------------------|----:| |Avg. |74.18| |AI2 Reasoning Challenge (25-Shot)|70.82| |HellaSwag (10-Shot) |88.07| |MMLU (5-Shot) |64.85| |TruthfulQA (0-shot) |67.99| |Winogrande (5-shot) |83.82| |GSM8k (5-shot) |69.52|
franklee1015/q-FrozenLake-v1-4x4-noSlippery
franklee1015
2024-03-11T18:24:40Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-28T12:12:11Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="franklee1015/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ferrazzipietro/Qwen1.5-7B-Chat__adapters_en.layer1_8_torch.bfloat16_64_64_0.01_4_0.0002
ferrazzipietro
2024-03-11T18:19:50Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-11T18:19:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
suban244/muRIL-squad-nep-translated-squad
suban244
2024-03-11T18:15:48Z
27
1
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:suban244/muRIL-squad", "base_model:finetune:suban244/muRIL-squad", "endpoints_compatible", "region:us" ]
question-answering
2023-12-11T08:57:32Z
--- base_model: suban244/muRIL-squad tags: - generated_from_trainer model-index: - name: muRIL-squad-nep-translated-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # muRIL-squad-nep-translated-squad This model is a fine-tuned version of [suban244/muRIL-squad](https://huggingface.co/suban244/muRIL-squad) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
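## Usage

The card lists only the training setup; below is a minimal question-answering sketch with the standard `transformers` pipeline. The question/context strings are placeholders; in practice they would be Nepali text, matching the translated-SQuAD fine-tune:

```python
from transformers import pipeline

# Extractive QA with the fine-tuned checkpoint from this card
qa = pipeline("question-answering", model="suban244/muRIL-squad-nep-translated-squad")

# Placeholder inputs; real inputs would be a Nepali question and context
result = qa(
    question="Who wrote the novel?",
    context="The novel Shirishko Phool was written by Parijat.",
)
print(result["answer"], result["score"])
```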
franklee1015/dqn-SpaceInvadersNoFrameskip-v4
franklee1015
2024-03-11T18:15:19Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-07T02:59:47Z
--- tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning model-index: - name: dqn-SpaceInvadersNoFrameskip-v4 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 877.00 +/- 176.35 name: mean_reward verified: false library_name: stable-baselines3 --- # **Deep Q Learning** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **Deep Q Learning** agent playing **SpaceInvadersNoFrameskip-v4**.
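## Usage

The card gives no loading code; below is a minimal sketch with `huggingface_sb3` and Stable-Baselines3. The checkpoint filename is an assumption based on the usual RL-Zoo naming, since the card does not list the repository's files:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the standard rl-zoo convention for this repo id
checkpoint = load_from_hub(
    repo_id="franklee1015/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Note: evaluating the policy requires the same Atari preprocessing wrappers
# (frame-skip, frame-stack, etc.) that were used during training.
```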
shubham-krishna/peft-gemma-2b-dolly
shubham-krishna
2024-03-11T18:14:45Z
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "license:other", "region:us" ]
null
2024-03-11T18:14:38Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: google/gemma-2b datasets: - generator model-index: - name: peft-gemma-2b-dolly results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # peft-gemma-2b-dolly This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - PEFT 0.8.2 - Transformers 4.39.0.dev0 - Pytorch 2.3.0 - Datasets 2.17.1 - Tokenizers 0.15.2
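## Usage

The card documents the adapter's hyperparameters but not how to load it; a minimal PEFT sketch follows, assuming the adapter attaches to the `google/gemma-2b` base named in the metadata (Gemma weights are gated, so the base download presumes license acceptance):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "shubham-krishna/peft-gemma-2b-dolly")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Instruction: Explain what a language model is.\nResponse:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```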
Artefact2/Mixtral-8x7B-v0.1-GGUF
Artefact2
2024-03-11T18:12:59Z
118
2
null
[ "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-02-26T16:39:22Z
--- language: - en license: apache-2.0 --- These are GGUF quantized versions of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`. Some model files above 50GB are split into smaller files. To concatenate them, use the `cat` command (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf` * What quant do I need? See https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 * Quant requests? Just open a discussion in the community tab.
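Once concatenated, the result loads like any single-file GGUF; a minimal sketch with `llama-cpp-python` (the filename is an assumption following the naming pattern above; use whichever quant you downloaded):

```python
from llama_cpp import Llama

# Model path assumed; this is a base model, so prompt it completion-style
llm = Llama(model_path="Mixtral-8x7B-v0.1-Q4_K_M.gguf", n_ctx=4096)
out = llm("The capital of Spain is", max_tokens=16)
print(out["choices"][0]["text"])
```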
ferrazzipietro/Qwen1.5-7B-Chat__adapters_en.layer1_8_torch.bfloat16_64_64_0.01_2_0.0002
ferrazzipietro
2024-03-11T18:12:05Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-11T18:11:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
automerger/Experiment24Yam-7B
automerger
2024-03-11T18:04:40Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:mayacinka/yam-jom-7B", "base_model:merge:mayacinka/yam-jom-7B", "base_model:yam-peleg/Experiment24-7B", "base_model:merge:yam-peleg/Experiment24-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-08T16:52:03Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - automerger base_model: - yam-peleg/Experiment24-7B - mayacinka/yam-jom-7B --- # Experiment24Yam-7B Experiment24Yam-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [yam-peleg/Experiment24-7B](https://huggingface.co/yam-peleg/Experiment24-7B) * [mayacinka/yam-jom-7B](https://huggingface.co/mayacinka/yam-jom-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: yam-peleg/Experiment24-7B layer_range: [0, 32] - model: mayacinka/yam-jom-7B layer_range: [0, 32] merge_method: slerp base_model: yam-peleg/Experiment24-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/Experiment24Yam-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
mouss217/mistral-7b-chatgptprompts
mouss217
2024-03-11T18:03:50Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-11T17:54:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aegunal/FT_IPD_gemma7b
aegunal
2024-03-11T18:02:53Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-11T18:02:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Draichi/Taxi-v3-Qlearning
Draichi
2024-03-11T18:01:15Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-11T18:01:13Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-Qlearning results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Draichi/Taxi-v3-Qlearning", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
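A minimal sketch of downloading and rolling out this agent end to end, assuming the pickle follows the Hugging Face Deep RL course layout — a dict exposing `"qtable"` and `"env_id"` keys — and that `load_from_hub` is the course-notebook helper; both the key names and the helper definition are assumptions, since the card does not document the file contents:

```python
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Hypothetical stand-in for the course's `load_from_hub` helper.
def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Draichi/Taxi-v3-Qlearning", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Greedy rollout with the learned Q-table (assumed key: "qtable").
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```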
NikolayKozloff/occiglot-7b-es-en-GGUF
NikolayKozloff
2024-03-11T17:59:12Z
2
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-03-11T17:36:26Z
GGUF quantization of this model: https://huggingface.co/occiglot/occiglot-7b-es-en ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6481cb26ec65b8b77d8641a0/05VLRT64dmkyZALqj_69v.png) Occiglot-7B-ES-EN is a generative language model with 7B parameters for Spanish and English, trained by the Occiglot Research Collective. It is based on Mistral-7B-v0.1 and trained on 112B tokens of additional multilingual and code data with a block size of 8,192 tokens per sample. Note that the model is a general-purpose base model and was not instruction-fine-tuned nor optimized for chat or other applications.
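Since this repo ships GGUF weights, one plausible way to run them locally is through `llama-cpp-python`; the `.gguf` filename below is hypothetical — use whichever quantization file the repo actually contains:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# The exact .gguf filename is an assumption -- check the repo's file list.
llm = Llama(model_path="occiglot-7b-es-en.Q4_K_M.gguf", n_ctx=8192)

# Plain completion: the card notes this is a base model, not a chat model,
# so prompt it as a continuation rather than with a chat template.
out = llm("La capital de España es", max_tokens=32)
print(out["choices"][0]["text"])
```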
sunilregmi/wav2vec2-base-openslr43-colab
sunilregmi
2024-03-11T17:58:16Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-11T17:30:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ferrazzipietro/Qwen1.5-7B-Chat__adapters_en.layer1_8_torch.bfloat16_64_32_0.01_4_0.0002
ferrazzipietro
2024-03-11T17:56:49Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-11T17:56:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ibunescu/Phi-2_GDPR_chapter_classifier_v5_adapter
ibunescu
2024-03-11T17:55:00Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-11T17:54:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Dharil/Llama-2-7b-finetune-legal-data
Dharil
2024-03-11T17:52:18Z
0
0
peft
[ "peft", "region:us" ]
null
2024-03-11T17:51:37Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
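The `bitsandbytes` settings listed above map directly onto a `transformers` `BitsAndBytesConfig`, which would be needed when reloading a base model for this adapter — a sketch only, since the card does not name the base checkpoint (the repo name suggests Llama-2-7b, treated here as an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization config listed in the card above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Base checkpoint is an assumption inferred from the repo name.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```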
Thomstr/Taxi-v3
Thomstr
2024-03-11T17:50:36Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-11T17:36:06Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Thomstr/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
lkk688/detr-resnet-50_finetuned_coco
lkk688
2024-03-11T17:50:07Z
177
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-03-11T06:58:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
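The card itself is the empty auto-generated template, but the tags mark this repo as a DETR object-detection checkpoint fine-tuned on COCO, so a hedged inference sketch with the standard `transformers` pipeline would look like this (the image path is hypothetical):

```python
from transformers import pipeline

# Assumes the checkpoint loads like any DETR object-detection model.
detector = pipeline("object-detection", model="lkk688/detr-resnet-50_finetuned_coco")

results = detector("street_scene.jpg")  # hypothetical local image path
for det in results:
    print(det["label"], round(det["score"], 3), det["box"])
```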
Dharil/Llama-2-fine-tuned-on-legal-data
Dharil
2024-03-11T17:49:25Z
0
0
peft
[ "peft", "region:us" ]
null
2024-03-11T17:48:34Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
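Given the repo name and the `peft` library tag, the adapter would presumably be attached to a Llama-2-7b base like the sketch below — both the base checkpoint and the non-quantized load are assumptions, not documented in the card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Base model is inferred from the repo name; treat it as an assumption.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-hf", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Dharil/Llama-2-fine-tuned-on-legal-data")
model.eval()
```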
Thomstr/q-q-FrozenLake-v1-4x4-noSlippery_test
Thomstr
2024-03-11T17:49:00Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-11T15:55:24Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-q-FrozenLake-v1-4x4-noSlippery_test results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Thomstr/q-q-FrozenLake-v1-4x4-noSlippery_test", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
grianzeno/Aunt-Itoe
grianzeno
2024-03-11T17:43:00Z
0
0
null
[ "region:us" ]
null
2024-03-11T17:34:53Z
- text: >- AzumaFubuki, 1girl, mature female, completely nude, bedroom, standing, cowboy shot, perfect hands, perfect face, masterpiece, best quality, absurdres, long hair, hair over one eye, hair ribbon, straight-on, hands on hips, smile, curvy, mature female, thick thighs output: url: main/23401652.png
Weni/ZeroShot-3.4.2-Mistral-7b-DPO-1.0.0
Weni
2024-03-11T17:42:44Z
0
0
trl
[ "trl", "safetensors", "DPO", "ZeroShot", "en", "es", "pt", "base_model:Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged", "base_model:finetune:Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged", "license:mit", "region:us" ]
null
2024-03-11T17:06:46Z
--- license: mit library_name: "trl" tags: - DPO - ZeroShot base_model: Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged model-index: - name: Weni/ZeroShot-3.4.2-Mistral-7b-DPO-1.0.0 results: [] language: ['en', 'es', 'pt'] --- # Weni/ZeroShot-3.4.2-Mistral-7b-DPO-1.0.0 This model is a fine-tuned version of [Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged](https://huggingface.co/Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged) on the dataset Weni/zeroshot-dpo-1.0.0 with the DPO trainer. It is part of the ZeroShot project for [Weni](https://weni.ai/). It achieves the following results on the evaluation set: {'eval_loss': 0.10210147500038147, 'eval_runtime': 27.5135, 'eval_samples_per_second': 2.217, 'eval_steps_per_second': 0.291, 'eval_rewards/chosen': 0.792843222618103, 'eval_rewards/rejected': -3.810342311859131, 'eval_rewards/accuracies': 0.953125, 'eval_rewards/margins': 4.603185176849365, 'eval_logps/rejected': -51.665706634521484, 'eval_logps/chosen': -8.38036823272705, 'eval_logits/rejected': -1.3307629823684692, 'eval_logits/chosen': -1.3801817893981934, 'epoch': 2.82} ## Intended uses & limitations This model has not been trained to avoid specific instructions. ## Training procedure Finetuning was done on the model Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged with the following prompt: ``` Portuguese: [INST] Você é muito especialista em classificar a frase do usuário em um chatbot sobre: {context} Pare, pense bem e responda com APENAS UM ÚNICO \`id\` da classe que melhor represente a intenção para a frase do usuário de acordo com a análise de seu contexto, responda APENAS com o \`id\` da classe só se você tiver muita certeza e não explique o motivo. Na ausência, falta de informações ou caso a frase do usuário não se enquadre em nenhuma classe, classifique como "-1". # Essas são as Classes com seus Id e Contexto: {all_classes} # Frase do usuário: {input} # Id da Classe: [/INST] Spanish: [INST] Eres muy experto en clasificar la frase del usuario en un chatbot sobre: {context} Deténgase, piense bien y responda con SOLO UN ÚNICO \`id\` de la clase que mejor represente la intención para la frase del usuario de acuerdo con el análisis de su contexto, responda SOLO con el \`id\` de la clase si está muy seguro y no explique el motivo. En ausencia, falta de información o en caso de que la frase del usuario no se ajuste a ninguna clase, clasifique como "-1". # Estas son las Clases con sus Id y Contexto: {all_classes} # Frase del usuario: {input} # Id de la Clase: [/INST] English: [INST] You are very expert in classifying the user sentence in a chatbot about: {context} Stop, think carefully, and respond with ONLY ONE SINGLE \`id\` of the class that best represents the intention for the user's sentence according to the analysis of its context, respond ONLY with the \`id\` of the class if you are very sure and do not explain the reason. In the absence, lack of information, or if the user's sentence does not fit into any class, classify as "-1". 
# These are the Classes and its Context: {all_classes} # User's sentence: {input} # Class Id: [/INST] Chosen_response: {chosen_response} Rejected_response: {rejected_response} ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - per_device_train_batch_size: 8 - per_device_eval_batch_size: 8 - gradient_accumulation_steps: 4 - num_gpus: 1 - total_train_batch_size: 32 - optimizer: AdamW - lr_scheduler_type: cosine - num_steps: 48 - quantization_type: bitsandbytes - LoRA settings: bits: 4, use_exllama: True, device_map: auto, use_cache: False, lora_r: 8, lora_alpha: 16, lora_dropout: 0.1, bias: none, target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj'], task_type: CAUSAL_LM ### Training results ### Framework versions - transformers==4.38.2 - datasets==2.17.1 - peft==0.8.2 - safetensors==0.4.2 - evaluate==0.4.1 - bitsandbytes==0.42 - huggingface_hub==0.20.3 - seqeval==1.2.2 - optimum==1.17.1 - auto-gptq==0.7.0 - gpustat==1.1.1 - deepspeed==0.13.2 - wandb==0.16.3 - trl==0.7.11 - accelerate==0.27.2 - coloredlogs==15.0.1 - traitlets==5.14.1 - autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl ### Hardware - Cloud provider: runpod.io
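A minimal sketch of the DPO fine-tuning setup this card describes, using the pinned `trl==0.7.11` API; the dataset column names (`prompt`, `chosen`, `rejected`) follow the `DPOTrainer` convention and are assumptions about `Weni/zeroshot-dpo-1.0.0`, and `beta` is not reported in the card:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged"
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
dataset = load_dataset("Weni/zeroshot-dpo-1.0.0", split="train")

# Mirrors the hyperparameters reported above; output_dir is hypothetical.
args = TrainingArguments(
    output_dir="zeroshot-dpo",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    max_steps=48,
)

# ref_model=None lets TRL create the frozen reference copy internally.
trainer = DPOTrainer(
    model,
    ref_model=None,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    beta=0.1,  # not reported in the card; 0.1 is the TRL default
)
trainer.train()
```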
spar-ai/henry-LLM-epoch6-50dia-4bit
spar-ai
2024-03-11T17:42:27Z
64
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-03-11T17:38:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
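The tags mark this as a 4-bit GPTQ Mistral checkpoint, so — assuming `optimum` and `auto-gptq` are installed, as `transformers` requires for GPTQ weights — it should load through the standard path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "spar-ai/henry-LLM-epoch6-50dia-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# GPTQ quantization is detected from the checkpoint's config;
# requires the optimum and auto-gptq packages at load time.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```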
StaAhmed/llama_lora_QA
StaAhmed
2024-03-11T17:29:29Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:finetune:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2024-03-11T06:55:54Z
--- base_model: NousResearch/Llama-2-7b-chat-hf tags: - generated_from_trainer model-index: - name: llama_lora_QA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama_lora_QA This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.13.3
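The hyperparameters listed in this card map directly onto `transformers` `TrainingArguments` (pinned at 4.31.0 in the framework versions); a sketch of the equivalent configuration, with a hypothetical `output_dir`:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in the card above.
training_args = TrainingArguments(
    output_dir="llama_lora_QA",  # hypothetical
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=1,
)
```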
LarryAIDraw/Natsumi__Adult_Ver__-000017
LarryAIDraw
2024-03-11T17:28:02Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-03-11T17:13:24Z
--- license: creativeml-openrail-m --- https://civitai.com/models/343915/natsumi-adult-ver-date-a-live-lora
LarryAIDraw/CHAR-KirikoYukoku
LarryAIDraw
2024-03-11T17:27:44Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-03-11T17:11:38Z
--- license: creativeml-openrail-m --- https://civitai.com/models/342027/kiriko-yukoku-4-outfits-or-the-idolmster-shiny-colors