Dataset columns:

| Column | Type | Values / Range |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-04 06:27:36 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 466 distinct values |
| tags | sequence | lengths 1 – 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-04 06:25:54 |
| card | string | lengths 11 – 1.01M |

The rows below are sample records; each `card` value is the model card markdown flattened to a single line.
**modelId:** Mantis-VL/mantis-8b-idefics2-video-eval-refined-40k_4096_regression · **author:** Mantis-VL · **last_modified:** 2024-06-12T03:12:55Z · **downloads:** 4 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "safetensors", "idefics2", "text-classification", "generated_from_trainer", "base_model:TIGER-Lab/Mantis-8B-Idefics2", "base_model:finetune:TIGER-Lab/Mantis-8B-Idefics2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-classification · **createdAt:** 2024-06-11T02:46:41Z
**card:**
--- license: apache-2.0 base_model: TIGER-Lab/Mantis-8B-Idefics2 tags: - generated_from_trainer model-index: - name: mantis-8b-idefics2-video-eval-refined-40k_4096_regression results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mantis-8b-idefics2-video-eval-refined-40k_4096_regression This model is a fine-tuned version of [TIGER-Lab/Mantis-8B-Idefics2](https://huggingface.co/TIGER-Lab/Mantis-8B-Idefics2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
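The hyperparameters listed in this card map directly onto 🤗 `TrainingArguments`. A minimal sketch of how that configuration might be reconstructed (the `output_dir` is an illustrative assumption; the 8-GPU launch implied by `num_devices: 8` would come from `torchrun` or `accelerate`):

```python
# Hedged reconstruction of the training configuration reported in the card above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mantis-8b-idefics2-video-eval-refined-40k_4096_regression",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=8,  # 1 per device x 8 GPUs x 8 steps = total batch 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=1.0,
)
```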
**modelId:** smcleod/meta-llama-3-lora-smcleod-golang-ollama-charm · **author:** smcleod · **last_modified:** 2024-06-12T03:09:50Z · **downloads:** 0 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
**pipeline_tag:** null · **createdAt:** 2024-06-12T03:09:34Z
**card:**
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** smcleod - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
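For context, a hedged sketch of loading an Unsloth fine-tune like this for inference (`max_seq_length` is an assumption, not stated in the card):

```python
# Sketch under stated assumptions: loading the uploaded model with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="smcleod/meta-llama-3-lora-smcleod-golang-ollama-charm",
    max_seq_length=2048,  # assumption; not stated in the card
    load_in_4bit=True,    # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference mode
```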
**modelId:** duyntnet/deepseek-coder-7b-instruct-v1.5-imatrix-GGUF · **author:** duyntnet · **last_modified:** 2024-06-12T03:05:04Z · **downloads:** 338 · **likes:** 1 · **library_name:** transformers
**tags:** [ "transformers", "gguf", "imatrix", "deepseek-coder-7b-instruct-v1.5", "text-generation", "en", "license:other", "region:us", "conversational" ]
**pipeline_tag:** text-generation · **createdAt:** 2024-06-12T00:19:41Z
**card:**
--- license: other language: - en pipeline_tag: text-generation inference: false tags: - transformers - gguf - imatrix - deepseek-coder-7b-instruct-v1.5 --- Quantizations of https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5 # From original readme ### 3. How to Use Here are some examples of how to use our model. #### Chat Model Inference ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True).cuda() messages=[ { 'role': 'user', 'content': "write a quick sort algorithm in python."} ] inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device) outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ```
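Note that the snippet quoted in this card targets the upstream transformers checkpoint; the GGUF quantizations in this repo would instead be run with llama.cpp or a binding such as llama-cpp-python. A hedged sketch (the quant filename is a placeholder, not a file confirmed by the card):

```python
# Hypothetical GGUF usage via llama-cpp-python; model_path is a placeholder filename.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-7b-instruct-v1.5.Q4_K_M.gguf",  # placeholder quant file
    n_ctx=4096,  # assumption
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "write a quick sort algorithm in python."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```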
**modelId:** chainup244/Qwen-Qwen1.5-1.8B-1718161018 · **author:** chainup244 · **last_modified:** 2024-06-12T02:59:03Z · **downloads:** 134 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2024-06-12T02:57:00Z
**card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**modelId:** Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step2 · **author:** Minbyul · **last_modified:** 2024-06-12T02:58:33Z · **downloads:** 12 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step1", "base_model:finetune:Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2024-06-12T02:32:40Z
**card:**
--- license: apache-2.0 base_model: Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step1 tags: - alignment-handbook - trl - dpo - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: biomistral-7b-wo-kqa_golden-iter-dpo-step2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biomistral-7b-wo-kqa_golden-iter-dpo-step2 This model is a fine-tuned version of [Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step1](https://huggingface.co/Minbyul/biomistral-7b-wo-kqa_golden-iter-dpo-step1) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.6909 - Rewards/chosen: 0.0063 - Rewards/rejected: 0.0057 - Rewards/accuracies: 0.5625 - Rewards/margins: 0.0006 - Logps/rejected: -193.8717 - Logps/chosen: -168.4928 - Logits/rejected: -2.2060 - Logits/chosen: -2.9391 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
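For readers decoding the reward columns in this card: in DPO, the "rewards" are β-scaled log-probability ratios between the policy and the reference model, and the reported loss of 0.6909 sits near ln 2 ≈ 0.693 because the margins are close to zero. A sketch of the standard DPO objective (generic math, not this repo's training code; the β default is purely illustrative):

```python
# Standard DPO objective in terms of per-sequence log-probabilities (illustrative).
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):  # beta assumed, not from the card
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)        # "Rewards/chosen"
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)  # "Rewards/rejected"
    margins = chosen_rewards - rejected_rewards                             # "Rewards/margins"
    loss = -F.logsigmoid(margins).mean()  # ~0.693 when margins are near zero
    return loss, chosen_rewards, rejected_rewards
```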
**modelId:** Kudod/model-massp-mnist · **author:** Kudod · **last_modified:** 2024-06-12T02:56:19Z · **downloads:** 0 · **likes:** 0 · **library_name:** null
**tags:** [ "safetensors", "region:us" ]
**pipeline_tag:** null · **createdAt:** 2024-06-12T02:56:16Z
**card:**
# My MLP model This is my trained demo model for MaSSP.
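Since the repo ships only `safetensors` weights with no model card metadata, loading is manual. A hedged sketch of inspecting the raw tensors (the weight filename and the MLP definition it would feed are assumptions):

```python
# Sketch: downloading and inspecting a safetensors-only checkpoint.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download("Kudod/model-massp-mnist", "model.safetensors")  # filename assumed
state_dict = load_file(path)
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
# These tensors would then be loaded into a matching MLP module (not provided in the repo).
```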
**modelId:** donghuna/distilbert-base-uncased-finetuned-emotion · **author:** donghuna · **last_modified:** 2024-06-12T02:55:00Z · **downloads:** 118 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-classification · **createdAt:** 2024-06-11T10:32:12Z
**card:**
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9261477732487463 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2148 - Accuracy: 0.926 - F1: 0.9261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3022 | 0.9085 | 0.9081 | | No log | 2.0 | 500 | 0.2148 | 0.926 | 0.9261 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
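A minimal usage sketch for this emotion classifier via the text-classification pipeline (the input sentence is illustrative; label names come from the emotion dataset):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="donghuna/distilbert-base-uncased-finetuned-emotion",
)
print(clf("I'm thrilled the training finally converged!"))
# e.g. [{'label': 'joy', 'score': 0.9...}]
```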
**modelId:** vwxyzjn/ppo_zephyr_vllm_2e-6_kl_0.02_num_mini_batches_1 · **author:** vwxyzjn · **last_modified:** 2024-06-12T02:54:40Z · **downloads:** 7 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:finetune:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2024-06-12T02:53:37Z
**card:**
--- license: apache-2.0 base_model: alignment-handbook/zephyr-7b-sft-full tags: - generated_from_trainer model-index: - name: ppo_zephyr_vllm_2e-6_kl_0.02_num_mini_batches_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ppo_zephyr_vllm_2e-6_kl_0.02_num_mini_batches_1 This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 7 - gradient_accumulation_steps: 64 - total_train_batch_size: 448 - total_eval_batch_size: 56 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
**modelId:** dmo0798/based_trained_dilibert_sentiment_analysis · **author:** dmo0798 · **last_modified:** 2024-06-12T02:49:17Z · **downloads:** 122 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-classification · **createdAt:** 2024-06-12T02:48:58Z
**card:**
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: based_trained_dilibert_sentiment_analysis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # based_trained_dilibert_sentiment_analysis This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2706 - Accuracy: 0.902 - Confusion Matrix: [[194 46] [ 52 708]] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
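The reported accuracy of 0.902 is recoverable from the confusion matrix in the card; a quick numpy check (not this repo's evaluation code):

```python
# Accuracy from the reported confusion matrix [[194, 46], [52, 708]].
import numpy as np

cm = np.array([[194, 46], [52, 708]])  # rows: true class, columns: predicted class
accuracy = np.trace(cm) / cm.sum()     # (194 + 708) / 1000 = 0.902
print(accuracy)
```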
**modelId:** hdve/Qwen-Qwen1.5-0.5B-1718159995 · **author:** hdve · **last_modified:** 2024-06-12T02:41:01Z · **downloads:** 136 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2024-06-12T02:40:24Z
**card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**modelId:** Augusto777/vit-base-patch16-224-ve-U11-b-40 · **author:** Augusto777 · **last_modified:** 2024-06-12T02:40:01Z · **downloads:** 196 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**pipeline_tag:** image-classification · **createdAt:** 2024-06-12T01:46:44Z
**card:**
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-ve-U11-b-40 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.8478260869565217 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-ve-U11-b-40 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6399 - Accuracy: 0.8478 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.92 | 6 | 1.3827 | 0.3913 | | 1.3824 | 2.0 | 13 | 1.3319 | 0.6087 | | 1.3824 | 2.92 | 19 | 1.2476 | 0.5435 | | 1.3034 | 4.0 | 26 | 1.1450 | 0.5217 | | 1.1431 | 4.92 | 32 | 1.0679 | 0.5435 | | 1.1431 | 6.0 | 39 | 1.0006 | 0.6087 | | 1.0123 | 6.92 | 45 | 0.9617 | 0.6522 | | 0.8798 | 8.0 | 52 | 0.8575 | 0.7609 | | 0.8798 | 8.92 | 58 | 0.8074 | 0.6957 | | 0.7538 | 10.0 | 65 | 0.7447 | 0.7826 | | 0.6115 | 10.92 | 71 | 0.7204 | 0.7826 | | 0.6115 | 12.0 | 78 | 0.6399 | 0.8478 | | 0.5009 | 12.92 | 84 | 0.5726 | 0.8478 | | 0.389 | 14.0 | 91 | 0.5825 | 0.8478 | | 0.389 | 14.92 | 97 | 0.6231 | 0.7609 | | 0.3348 | 16.0 | 104 | 0.5510 | 0.8478 | | 0.2616 | 16.92 | 110 | 0.5070 | 0.8478 | | 0.2616 | 18.0 | 117 | 0.5040 | 0.8261 | | 0.2188 | 18.92 | 123 | 0.5738 | 0.7826 | | 0.2078 | 20.0 | 130 | 0.5398 | 0.8043 | | 0.2078 | 20.92 | 136 | 0.5334 | 0.7826 | | 0.2165 | 22.0 | 143 | 0.6043 | 0.7826 | | 0.2165 | 22.92 | 149 | 0.5817 | 0.8043 | | 0.1645 | 24.0 | 156 | 0.6465 | 0.7391 | | 0.1413 | 24.92 | 162 | 0.6638 | 0.8043 | | 0.1413 | 26.0 | 169 | 0.5710 | 0.8261 | | 0.141 | 26.92 | 175 | 0.6494 | 0.8043 | | 0.1313 | 28.0 | 182 | 0.7649 | 0.6957 | | 0.1313 | 28.92 | 188 | 0.6130 | 0.7609 | | 0.14 | 30.0 | 195 | 0.6718 | 0.7609 | | 0.1284 | 30.92 | 201 | 0.6660 | 0.8261 | | 0.1284 | 32.0 | 208 | 0.6286 | 0.7826 | | 0.1135 | 32.92 | 214 | 0.6424 | 0.8043 | | 0.1024 | 34.0 | 221 | 0.6339 | 0.8043 | | 0.1024 | 34.92 | 227 | 0.6132 | 0.8043 | | 0.1108 | 36.0 | 234 | 0.5975 | 0.8478 | | 0.0944 | 36.92 | 240 | 0.5981 | 0.8478 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
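A hedged usage sketch for this image classifier (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Augusto777/vit-base-patch16-224-ve-U11-b-40",
)
print(classifier("example.jpg"))  # placeholder path; returns [{'label': ..., 'score': ...}]
```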
**modelId:** dellaanima/llama2_7b_hf_LoRA_FT_merged_seq_len_128_wikitext2 · **author:** dellaanima · **last_modified:** 2024-06-12T02:37:35Z · **downloads:** 5 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2024-06-12T02:23:01Z
**card:**
## Model Performance - **Validation Loss:** 1.984 - **Validation Perplexity:** 7.835 ## Model Configuration - **LoRA FT:** Applied to `self_attn.q_proj` and `self_attn.v_proj`, Rank = 16 - **Epochs:** 3 - **Learning Rate:** 0.00001 - **Batch Size:** 8 - **Sequence Length:** 128
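The stated setup (rank-16 LoRA on the attention query and value projections) corresponds to a PEFT config roughly like the following; `lora_alpha` and `lora_dropout` are assumptions, since the card does not state them:

```python
# Hedged reconstruction of the stated LoRA configuration.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                 # Rank = 16, per the card
    target_modules=["q_proj", "v_proj"],  # self_attn.q_proj / self_attn.v_proj
    lora_alpha=32,                        # assumption; not stated in the card
    lora_dropout=0.05,                    # assumption; not stated in the card
    task_type="CAUSAL_LM",
)
```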
**modelId:** DBangshu/GPT2_5_2 · **author:** DBangshu · **last_modified:** 2024-06-12T02:36:25Z · **downloads:** 136 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2024-06-12T02:36:02Z
**card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**modelId:** bella05/pogny-16-0.00002-all · **author:** bella05 · **last_modified:** 2024-06-12T02:35:55Z · **downloads:** 108 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-large", "base_model:finetune:klue/roberta-large", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-classification · **createdAt:** 2024-06-12T00:51:48Z
**card:**
--- base_model: klue/roberta-large tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: pogny-16-0.00002-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pogny-16-0.00002-all This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5154 - Accuracy: 0.7210 - F1: 0.7193 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.2906 | 1.0 | 5108 | 1.0767 | 0.7189 | 0.7147 | | 0.2002 | 2.0 | 10216 | 1.1983 | 0.7199 | 0.7181 | | 0.1143 | 3.0 | 15324 | 1.5154 | 0.7210 | 0.7193 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0a0+b5021ba - Datasets 2.6.2 - Tokenizers 0.14.1
**modelId:** chainup244/Qwen-Qwen1.5-0.5B-1718159419 · **author:** chainup244 · **last_modified:** 2024-06-12T02:35:16Z · **downloads:** 134 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2024-06-12T02:30:27Z
**card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**modelId:** hdve/Qwen-Qwen1.5-7B-1718158702 · **author:** hdve · **last_modified:** 2024-06-12T02:18:24Z · **downloads:** 2 · **likes:** 0 · **library_name:** peft
**tags:** [ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
**pipeline_tag:** null · **createdAt:** 2024-06-12T02:18:22Z
**card:**
--- library_name: peft base_model: Qwen/Qwen1.5-7B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
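This repo is a PEFT adapter rather than a full checkpoint, so it is attached to the Qwen/Qwen1.5-7B base weights at load time. A minimal sketch:

```python
# Sketch: loading the PEFT adapter on top of its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B")
model = PeftModel.from_pretrained(base, "hdve/Qwen-Qwen1.5-7B-1718158702")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")
```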
**modelId:** hdve/Qwen-Qwen1.5-0.5B-1718158477 · **author:** hdve · **last_modified:** 2024-06-12T02:14:46Z · **downloads:** 2 · **likes:** 0 · **library_name:** peft
**tags:** [ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
**pipeline_tag:** null · **createdAt:** 2024-06-12T02:14:37Z
**card:**
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
**modelId:** stiucsib/gemma_kto_goat_ch1000 · **author:** stiucsib · **last_modified:** 2024-06-12T02:12:54Z · **downloads:** 133 · **likes:** 0 · **library_name:** transformers
**tags:** [ "transformers", "safetensors", "gemma", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
**pipeline_tag:** text-generation · **createdAt:** 2024-06-12T02:11:34Z
**card:**
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**modelId:** AvinashAmballa/results · **author:** AvinashAmballa · **last_modified:** 2024-06-12T01:58:54Z · **downloads:** 29 · **likes:** 0 · **library_name:** diffusers
**tags:** [ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
**pipeline_tag:** text-to-image · **createdAt:** 2024-06-12T01:41:40Z
**card:**
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers base_model: CompVis/stable-diffusion-v1-4 inference: true instance_prompt: a photo of sks dog --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - AvinashAmballa/results This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
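The card leaves its usage snippet as a TODO; a hedged sketch of standard diffusers usage for a DreamBooth checkpoint like this one, with the instance prompt taken from the card:

```python
# Sketch: running the DreamBooth pipeline (standard diffusers usage, not the repo's own snippet).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "AvinashAmballa/results", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog").images[0]  # instance prompt from the card
image.save("sks_dog.png")
```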
stiucsib/gemma_kto_goat_ch3
stiucsib
2024-06-12T01:54:28Z
134
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-12T01:52:59Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
datek/Qwen-Qwen1.5-0.5B-1718156953
datek
2024-06-12T01:49:15Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2024-06-12T01:49:13Z
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
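The "How to Get Started" section of this card is an empty template; a minimal sketch for loading what the metadata declares as a PEFT adapter on top of its base model might look like the following (that the adapter is LoRA-style, and the generation settings, are assumptions). The same pattern applies to the other PEFT adapter records in this dump.

```python
# A minimal sketch, assuming this repo holds a PEFT (e.g. LoRA) adapter
# for the declared base model Qwen/Qwen1.5-0.5B.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "datek/Qwen-Qwen1.5-0.5B-1718156953")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```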
tundao/Qwen-Qwen1.5-7B-1718156786
tundao
2024-06-12T01:46:30Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
2024-06-12T01:46:26Z
--- library_name: peft base_model: Qwen/Qwen1.5-7B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
tundao/Qwen-Qwen1.5-1.8B-1718156470
tundao
2024-06-12T01:41:13Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-06-12T01:41:10Z
--- library_name: peft base_model: Qwen/Qwen1.5-1.8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
mynameisdidit/fine-tuned-paraphrase-bert-en
mynameisdidit
2024-06-12T01:37:32Z
108
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-12T01:16:45Z
# Model Card for Model ID ## Model Details - **Developed by:** Ditoprasetyo Rusharsono Soemarso - **Model type:** BERT - **License:** [License under which the model is distributed, e.g., Apache License 2.0] - **Finetuned from model:** [If applicable, mention the pre-trained model used for fine-tuning] ## Evaluation Metrics - **Accuracy:** approximately 84.31% - **F1 Score:** approximately 0.8877 ## Training Results - **Global Step:** 1377 - **Training Loss:** approximately 0.2528 - **Training Runtime:** approximately 252.901 seconds (about 4 minutes and 13 seconds) - **Train Samples per Second:** approximately 43.511 - **Train Steps per Second:** approximately 5.445 - **Total FLOPs:** approximately 4.05e14 - **Epochs:** 3 (completed)
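Since the card above gives metrics but no usage snippet, here is a minimal sketch assuming the checkpoint follows the standard transformers text-classification interface for sentence pairs; the label names are not documented, so inspect the output.

```python
# A minimal usage sketch; that the model scores sentence pairs for
# paraphrase detection is inferred from its name, not documented.
from transformers import pipeline

clf = pipeline("text-classification", model="mynameisdidit/fine-tuned-paraphrase-bert-en")
result = clf({"text": "A man is playing a guitar.", "text_pair": "Someone plays the guitar."})
print(result)  # e.g. [{'label': ..., 'score': ...}] -- label semantics are an assumption
```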
postitive666/Llama3-Instruct-8B-SimPO
postitive666
2024-06-12T01:30:02Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2405.14734", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T10:21:22Z
This model is released alongside the preprint *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)*. Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
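The card points to the paper and repository but omits a usage snippet; a minimal generation sketch, assuming the standard Llama-3 chat template ships with this tokenizer, might look like this:

```python
# A minimal sketch, not the authors' official example; sampling settings
# and the chat template's availability are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "postitive666/Llama3-Instruct-8B-SimPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Summarize SimPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```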
talli96123/meat_calssify_fresh_no_crop_V_0_1_best
talli96123
2024-06-12T01:29:52Z
193
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-12T01:26:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
esraa-sayed/unsloth-mistral-tuned
esraa-sayed
2024-06-12T01:27:57Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-12T01:26:44Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** esraa-sayed - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
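As a hedged supplement to the card above, here is a minimal inference sketch using Unsloth's own loader, which the card says was used for training; the sequence length and 4-bit loading are assumptions about the setup.

```python
# A minimal sketch, assuming the repo loads cleanly via Unsloth's API;
# max_seq_length and 4-bit loading are illustrative choices.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="esraa-sayed/unsloth-mistral-tuned",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference path

inputs = tokenizer(["Continue: The quick brown fox"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```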
Sharan1712/llama2_7B_alpaca_qdora_4bit_5b
Sharan1712
2024-06-12T01:26:16Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-12T01:23:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
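The card above is an empty template, but the repo's tags indicate a bitsandbytes 4-bit Llama-2 checkpoint; a minimal loading sketch consistent with that (the quantization config and the Alpaca-style prompt are assumptions) could be:

```python
# A minimal sketch; the 4-bit config below matches the repo's
# bitsandbytes tags but is not documented by the card itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Sharan1712/llama2_7B_alpaca_qdora_4bit_5b"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Alpaca-style prompt: an assumption based on the model's name
prompt = "Below is an instruction. Write a response.\n\n### Instruction:\nName three fruits.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```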
abhayesian/LLama2_HarmBench_NoAttack_3
abhayesian
2024-06-12T01:25:50Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-11T21:56:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
datek/google-gemma-2b-1718155438
datek
2024-06-12T01:24:00Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
2024-06-12T01:23:58Z
--- library_name: peft base_model: google/gemma-2b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
blockblockblock/Qwen2-72B-Instruct-bpw4.2-exl2
blockblockblock
2024-06-12T01:23:40Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "arxiv:2309.00071", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-12T00:51:04Z
--- license: other license_name: tongyi-qianwen license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat --- # Qwen2-72B-Instruct ## Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 72B Qwen2 model. Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc. Qwen2-72B-Instruct supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). <br> ## Model Details Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code for Qwen2 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2-72B-Instruct", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-72B-Instruct") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ### Processing Long Texts To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps: 1.
**Install vLLM**: You can install vLLM by running the following command: ```bash pip install "vllm>=0.4.3" ``` Or you can install vLLM from [source](https://github.com/vllm-project/vllm/). 2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by adding the snippet below: ```json { "architectures": [ "Qwen2ForCausalLM" ], // ... "vocab_size": 152064, // add the following snippet "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } ``` This snippet enables YARN to support longer contexts. 3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-compatible server using the command: ```bash python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-72B-Instruct --model path/to/weights ``` Then you can access the Chat API with: ```bash curl http://localhost:8000/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "Qwen2-72B-Instruct", "messages": [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Your Long Input Here."} ] }' ``` For further vLLM usage instructions, please refer to our [GitHub](https://github.com/QwenLM/Qwen2). **Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required. ## Evaluation We briefly compare Qwen2-72B-Instruct with similar-sized instruction-tuned LLMs, including our previous Qwen1.5-72B-Chat. The results are shown as follows: | Datasets | Llama-3-70B-Instruct | Qwen1.5-72B-Chat | **Qwen2-72B-Instruct** | | :--- | :---: | :---: | :---: | | _**English**_ | | | | | MMLU | 82.0 | 75.6 | **82.3** | | MMLU-Pro | 56.2 | 51.7 | **64.4** | | GPQA | 41.9 | 39.4 | **42.4** | | TheoremQA | 42.5 | 28.8 | **44.4** | | MT-Bench | 8.95 | 8.61 | **9.12** | | Arena-Hard | 41.1 | 36.1 | **48.1** | | IFEval (Prompt Strict-Acc.) | 77.3 | 55.8 | **77.6** | | _**Coding**_ | | | | | HumanEval | 81.7 | 71.3 | **86.0** | | MBPP | **82.3** | 71.9 | 80.2 | | MultiPL-E | 63.4 | 48.1 | **69.2** | | EvalPlus | 75.2 | 66.9 | **79.0** | | LiveCodeBench | 29.3 | 17.9 | **35.7** | | _**Mathematics**_ | | | | | GSM8K | **93.0** | 82.7 | 91.1 | | MATH | 50.4 | 42.5 | **59.7** | | _**Chinese**_ | | | | | C-Eval | 61.6 | 76.1 | **83.8** | | AlignBench | 7.42 | 7.28 | **8.27** | ## Citation If you find our work helpful, feel free to cite us. ``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
datek/Qwen-Qwen1.5-7B-1718155393
datek
2024-06-12T01:23:15Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
2024-06-12T01:23:13Z
--- library_name: peft base_model: Qwen/Qwen1.5-7B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
acastelan/llama38binstruct_summarize
acastelan
2024-06-12T01:22:27Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "base_model:adapter:NousResearch/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-06-12T01:22:15Z
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: llama38binstruct_summarize
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# llama38binstruct_summarize

This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3836

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4136        | 1.3158 | 25   | 1.7232          |
| 0.4308        | 2.6316 | 50   | 1.9632          |
| 0.2186        | 3.9474 | 75   | 2.0669          |
| 0.0954        | 5.2632 | 100  | 2.3836          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
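The card's usage section is otherwise empty, so the following is a minimal, untested sketch of how a PEFT LoRA adapter like this is typically loaded onto its base model. The prompt content and generation settings are placeholders, not the author's recipe.

```python
# Minimal sketch: attach the fine-tuned adapter to its base model.
# Assumes enough GPU memory and that the adapter weights live in this repo.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Meta-Llama-3-8B-Instruct"
adapter_id = "acastelan/llama38binstruct_summarize"  # repo id from this card

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # load the LoRA adapter

messages = [{"role": "user", "content": "Summarize: ..."}]  # placeholder input
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```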
bartowski/L3-8B-Stheno-v3.2-GGUF
bartowski
2024-06-12T01:21:33Z
2,829
14
null
[ "gguf", "text-generation", "en", "dataset:Gryphe/Opus-WritingPrompts", "dataset:Sao10K/Claude-3-Opus-Instruct-15K", "dataset:Sao10K/Short-Storygen-v2", "dataset:Sao10K/c2-Logs-Filtered", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-12T01:04:39Z
---
license: cc-by-nc-4.0
language:
- en
datasets:
- Gryphe/Opus-WritingPrompts
- Sao10K/Claude-3-Opus-Instruct-15K
- Sao10K/Short-Storygen-v2
- Sao10K/c2-Logs-Filtered
quantized_by: bartowski
pipeline_tag: text-generation
---

## Llamacpp imatrix Quantizations of L3-8B-Stheno-v3.2

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3130">b3130</a> for quantization.

Original model: https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2

All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)

## Prompt format

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>


```

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [L3-8B-Stheno-v3.2-Q8_0.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [L3-8B-Stheno-v3.2-Q6_K.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [L3-8B-Stheno-v3.2-Q5_K_M.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [L3-8B-Stheno-v3.2-Q5_K_S.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [L3-8B-Stheno-v3.2-Q4_K_M.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [L3-8B-Stheno-v3.2-Q4_K_S.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [L3-8B-Stheno-v3.2-IQ4_XS.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [L3-8B-Stheno-v3.2-Q3_K_L.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [L3-8B-Stheno-v3.2-Q3_K_M.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [L3-8B-Stheno-v3.2-IQ3_M.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [L3-8B-Stheno-v3.2-Q3_K_S.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [L3-8B-Stheno-v3.2-IQ3_XS.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [L3-8B-Stheno-v3.2-IQ3_XXS.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [L3-8B-Stheno-v3.2-Q2_K.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [L3-8B-Stheno-v3.2-IQ2_M.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [L3-8B-Stheno-v3.2-IQ2_S.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [L3-8B-Stheno-v3.2-IQ2_XS.gguf](https://huggingface.co/bartowski/L3-8B-Stheno-v3.2-GGUF/blob/main/L3-8B-Stheno-v3.2-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/L3-8B-Stheno-v3.2-GGUF --include "L3-8B-Stheno-v3.2-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/L3-8B-Stheno-v3.2-GGUF --include "L3-8B-Stheno-v3.2-Q8_0.gguf/*" --local-dir L3-8B-Stheno-v3.2-Q8_0
```

You can either specify a new local-dir (L3-8B-Stheno-v3.2-Q8_0) or download them all in place (./)

## Which file should I choose?

A great write-up with charts showing various performance trade-offs is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
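For a quick local smoke test of a downloaded quant, one option is llama-cpp-python. The sketch below is illustrative only: the file name assumes the Q4_K_M download above, and the prompt string hand-applies the Llama-3 format documented in this card.

```python
# Illustrative sketch: load a downloaded quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and the Q4_K_M file from above.
from llama_cpp import Llama

llm = Llama(model_path="./L3-8B-Stheno-v3.2-Q4_K_M.gguf", n_ctx=8192)

# Build the Llama-3 prompt format from this card by hand.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a helpful writing assistant.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Write one sentence about the sea.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
out = llm(prompt, max_tokens=64, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```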
datek/Qwen-Qwen1.5-1.8B-1718155262
datek
2024-06-12T01:21:05Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-06-12T01:21:03Z
--- library_name: peft base_model: Qwen/Qwen1.5-1.8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
datek/Qwen-Qwen1.5-0.5B-1718155209
datek
2024-06-12T01:20:11Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2024-06-12T01:20:09Z
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
mussed/test-trainer
mussed
2024-06-12T01:12:30Z
107
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-12T01:05:57Z
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: test-trainer
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# test-trainer

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5830
- Accuracy: 0.8529
- F1: 0.8966

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 230  | 0.4085          | 0.8456   | 0.8844 |
| No log        | 2.0   | 460  | 0.3548          | 0.8480   | 0.8916 |
| 0.3957        | 3.0   | 690  | 0.5830          | 0.8529   | 0.8966 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
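Since the card does not document usage, here is a minimal sketch of running the fine-tuned classifier through the Transformers pipeline. The label names are whatever the Trainer saved (e.g. LABEL_0/LABEL_1), because the card does not document the target classes.

```python
# Minimal sketch: run the fine-tuned BERT classifier with the pipeline API.
from transformers import pipeline

clf = pipeline("text-classification", model="mussed/test-trainer")
print(clf("These two sentences say the same thing."))  # inspect returned labels
```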
martimfasantos/tinyllama-1.1b-sum-dpo-full_LR2e-7_3epochs_old
martimfasantos
2024-06-12T01:10:21Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "dataset:openai/summarize_from_feedback", "base_model:martimfasantos/tinyllama-1.1b-sum-sft-full_old", "base_model:finetune:martimfasantos/tinyllama-1.1b-sum-sft-full_old", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T01:02:56Z
---
license: apache-2.0
base_model: martimfasantos/tinyllama-1.1b-sum-sft-full_old
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- openai/summarize_from_feedback
model-index:
- name: tinyllama-1.1b-sum-dpo-full_LR2e-7_3epochs_old
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# tinyllama-1.1b-sum-dpo-full_LR2e-7_3epochs_old

This model is a fine-tuned version of [martimfasantos/tinyllama-1.1b-sum-sft-full_old](https://huggingface.co/martimfasantos/tinyllama-1.1b-sum-sft-full_old) on the openai/summarize_from_feedback dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6307
- Rewards/chosen: -1.4504
- Rewards/rejected: -1.8097
- Rewards/accuracies: 0.6434
- Rewards/margins: 0.3593
- Logps/rejected: -244.1550
- Logps/chosen: -203.7530
- Logits/rejected: -1.7026
- Logits/chosen: -1.7263

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6931 | 0.0689 | 400 | 0.6932 | 0.0002 | 0.0003 | 0.4654 | -0.0001 | -63.1542 | -58.6924 | -3.1574 | -3.1630 |
| 0.692 | 0.1378 | 800 | 0.6928 | 0.0015 | 0.0008 | 0.5525 | 0.0007 | -63.0955 | -58.5586 | -3.1518 | -3.1574 |
| 0.6902 | 0.2068 | 1200 | 0.6914 | 0.0009 | -0.0027 | 0.5876 | 0.0037 | -63.4527 | -58.6187 | -3.1281 | -3.1338 |
| 0.6835 | 0.2757 | 1600 | 0.6888 | -0.0225 | -0.0320 | 0.5864 | 0.0096 | -66.3833 | -60.9598 | -3.0838 | -3.0895 |
| 0.6778 | 0.3446 | 2000 | 0.6845 | -0.0724 | -0.0918 | 0.5976 | 0.0194 | -72.3574 | -65.9486 | -3.0213 | -3.0270 |
| 0.6688 | 0.4135 | 2400 | 0.6792 | -0.1403 | -0.1725 | 0.6032 | 0.0323 | -80.4345 | -72.7375 | -2.9370 | -2.9428 |
| 0.6675 | 0.4824 | 2800 | 0.6732 | -0.2283 | -0.2756 | 0.6057 | 0.0472 | -90.7353 | -81.5436 | -2.8576 | -2.8635 |
| 0.6437 | 0.5513 | 3200 | 0.6646 | -0.3557 | -0.4265 | 0.6120 | 0.0708 | -105.8322 | -94.2796 | -2.7546 | -2.7607 |
| 0.6516 | 0.6203 | 3600 | 0.6602 | -0.4125 | -0.4982 | 0.6178 | 0.0856 | -112.9954 | -99.9643 | -2.6547 | -2.6612 |
| 0.6264 | 0.6892 | 4000 | 0.6514 | -0.5858 | -0.7050 | 0.6315 | 0.1192 | -133.6785 | -117.2944 | -2.5252 | -2.5324 |
| 0.6109 | 0.7581 | 4400 | 0.6474 | -0.6217 | -0.7587 | 0.6313 | 0.1370 | -139.0484 | -120.8850 | -2.4041 | -2.4124 |
| 0.6153 | 0.8270 | 4800 | 0.6432 | -0.7112 | -0.8720 | 0.6266 | 0.1608 | -150.3814 | -129.8305 | -2.3206 | -2.3302 |
| 0.6107 | 0.8959 | 5200 | 0.6407 | -0.7470 | -0.9249 | 0.6350 | 0.1779 | -155.6741 | -133.4166 | -2.2363 | -2.2476 |
| 0.6061 | 0.9649 | 5600 | 0.6392 | -0.7851 | -0.9723 | 0.6315 | 0.1871 | -160.4070 | -137.2255 | -2.1733 | -2.1859 |
| 0.5701 | 1.0338 | 6000 | 0.6356 | -1.0035 | -1.2450 | 0.6292 | 0.2415 | -187.6758 | -159.0581 | -2.0122 | -2.0292 |
| 0.5557 | 1.1027 | 6400 | 0.6358 | -1.0296 | -1.2785 | 0.6322 | 0.2489 | -191.0262 | -161.6682 | -1.9777 | -1.9953 |
| 0.5292 | 1.1716 | 6800 | 0.6333 | -1.0878 | -1.3492 | 0.6313 | 0.2614 | -198.1001 | -167.4900 | -1.8969 | -1.9159 |
| 0.5473 | 1.2405 | 7200 | 0.6354 | -1.0479 | -1.2958 | 0.6262 | 0.2479 | -192.7597 | -163.5001 | -1.9044 | -1.9226 |
| 0.6231 | 1.3094 | 7600 | 0.6346 | -1.2184 | -1.4979 | 0.6289 | 0.2795 | -212.9705 | -180.5535 | -1.8355 | -1.8558 |
| 0.5403 | 1.3784 | 8000 | 0.6339 | -1.1437 | -1.4111 | 0.6264 | 0.2673 | -204.2867 | -173.0842 | -1.8647 | -1.8848 |
| 0.5444 | 1.4473 | 8400 | 0.6339 | -1.0726 | -1.3310 | 0.6287 | 0.2584 | -196.2827 | -165.9765 | -1.8568 | -1.8768 |
| 0.5766 | 1.5162 | 8800 | 0.6329 | -1.0364 | -1.2879 | 0.6336 | 0.2516 | -191.9749 | -162.3483 | -1.8819 | -1.9009 |
| 0.525 | 1.5851 | 9200 | 0.6320 | -1.1870 | -1.4611 | 0.6366 | 0.2740 | -209.2869 | -177.4161 | -1.8122 | -1.8325 |
| 0.5174 | 1.6540 | 9600 | 0.6310 | -1.2662 | -1.5606 | 0.6375 | 0.2944 | -219.2438 | -185.3348 | -1.7597 | -1.7810 |
| 0.5312 | 1.7229 | 10000 | 0.6313 | -1.2979 | -1.6013 | 0.6359 | 0.3033 | -223.3081 | -188.5056 | -1.7629 | -1.7848 |
| 0.4923 | 1.7919 | 10400 | 0.6312 | -1.1596 | -1.4412 | 0.6334 | 0.2815 | -207.2955 | -174.6746 | -1.7754 | -1.7966 |
| 0.5386 | 1.8608 | 10800 | 0.6304 | -1.2706 | -1.5735 | 0.6373 | 0.3029 | -220.5279 | -185.7685 | -1.7500 | -1.7722 |
| 0.5178 | 1.9297 | 11200 | 0.6295 | -1.2859 | -1.6008 | 0.6443 | 0.3149 | -223.2599 | -187.3036 | -1.7272 | -1.7501 |
| 0.5556 | 1.9986 | 11600 | 0.6295 | -1.2652 | -1.5714 | 0.6362 | 0.3062 | -220.3214 | -185.2294 | -1.7356 | -1.7580 |
| 0.4901 | 2.0675 | 12000 | 0.6303 | -1.4749 | -1.8246 | 0.6447 | 0.3497 | -245.6420 | -206.2009 | -1.6688 | -1.6928 |
| 0.4713 | 2.1365 | 12400 | 0.6303 | -1.6230 | -2.0017 | 0.6471 | 0.3786 | -263.3478 | -221.0147 | -1.6397 | -1.6644 |
| 0.5188 | 2.2054 | 12800 | 0.6305 | -1.4593 | -1.8052 | 0.6408 | 0.3458 | -243.6979 | -204.6454 | -1.6776 | -1.7011 |
| 0.5395 | 2.2743 | 13200 | 0.6315 | -1.5373 | -1.9051 | 0.6429 | 0.3678 | -253.6892 | -212.4377 | -1.6591 | -1.6834 |
| 0.5059 | 2.3432 | 13600 | 0.6318 | -1.4799 | -1.8381 | 0.6431 | 0.3582 | -246.9884 | -206.6992 | -1.6812 | -1.7051 |
| 0.4543 | 2.4121 | 14000 | 0.6318 | -1.3717 | -1.7109 | 0.6459 | 0.3392 | -234.2693 | -195.8793 | -1.7134 | -1.7366 |
| 0.5121 | 2.4810 | 14400 | 0.6308 | -1.4206 | -1.7736 | 0.6447 | 0.3530 | -240.5389 | -200.7700 | -1.7016 | -1.7252 |
| 0.4847 | 2.5500 | 14800 | 0.6304 | -1.4817 | -1.8498 | 0.6443 | 0.3681 | -248.1589 | -206.8796 | -1.6912 | -1.7153 |
| 0.4701 | 2.6189 | 15200 | 0.6306 | -1.4145 | -1.7659 | 0.6445 | 0.3514 | -239.7732 | -200.1665 | -1.7090 | -1.7324 |
| 0.5011 | 2.6878 | 15600 | 0.6304 | -1.4080 | -1.7575 | 0.6434 | 0.3495 | -238.9349 | -199.5119 | -1.7135 | -1.7369 |
| 0.4936 | 2.7567 | 16000 | 0.6304 | -1.4490 | -1.8088 | 0.6436 | 0.3598 | -244.0595 | -203.6143 | -1.7010 | -1.7248 |
| 0.4952 | 2.8256 | 16400 | 0.6312 | -1.4483 | -1.8060 | 0.6438 | 0.3577 | -243.7794 | -203.5389 | -1.7043 | -1.7279 |
| 0.5024 | 2.8946 | 16800 | 0.6304 | -1.4492 | -1.8094 | 0.6429 | 0.3602 | -244.1201 | -203.6308 | -1.7037 | -1.7274 |
| 0.5054 | 2.9635 | 17200 | 0.6303 | -1.4484 | -1.8080 | 0.6436 | 0.3596 | -243.9776 | -203.5508 | -1.7024 | -1.7262 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
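As context for the Rewards/* columns above: under the standard DPO formulation (which the trl/dpo tags suggest this run follows; the card itself does not spell this out, and beta is not reported), the logged reward for a completion is the beta-scaled log-probability ratio against the frozen reference model:

```latex
% Implicit DPO reward (standard formulation; \beta is this run's configured
% value, which the card does not report):
r_\theta(x, y) = \beta \, \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
% "Rewards/margins" is the mean of r_\theta(x, y_w) - r_\theta(x, y_l) over
% chosen/rejected pairs, and "Rewards/accuracies" is the fraction of pairs
% where that margin is positive.
```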
ghemdd/gemma_kto_only_sft_mcqa_token_only
ghemdd
2024-06-12T01:09:18Z
7
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-12T01:04:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
talli96123/meat_calssify_fresh_crop_fixed_overlap_V_0_2
talli96123
2024-06-12T01:06:08Z
193
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-12T01:03:39Z
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: meat_calssify_fresh_crop_fixed_overlap_V_0_2
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9050632911392406
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# meat_calssify_fresh_crop_fixed_overlap_V_0_2

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3158
- Accuracy: 0.9051

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0836 | 1.0 | 20 | 1.0836 | 0.3892 |
| 1.0325 | 2.0 | 40 | 1.0308 | 0.5032 |
| 0.9331 | 3.0 | 60 | 0.9478 | 0.5506 |
| 0.8711 | 4.0 | 80 | 0.9827 | 0.5380 |
| 0.8252 | 5.0 | 100 | 0.9171 | 0.5665 |
| 0.7597 | 6.0 | 120 | 0.8175 | 0.6234 |
| 0.6528 | 7.0 | 140 | 0.7884 | 0.6835 |
| 0.5646 | 8.0 | 160 | 0.7034 | 0.7025 |
| 0.5026 | 9.0 | 180 | 0.6805 | 0.7025 |
| 0.4534 | 10.0 | 200 | 0.6223 | 0.7690 |
| 0.4244 | 11.0 | 220 | 0.6262 | 0.7405 |
| 0.4077 | 12.0 | 240 | 0.6230 | 0.7595 |
| 0.3962 | 13.0 | 260 | 0.6731 | 0.7184 |
| 0.3587 | 14.0 | 280 | 0.5633 | 0.7911 |
| 0.316 | 15.0 | 300 | 0.5808 | 0.7848 |
| 0.2472 | 16.0 | 320 | 0.5478 | 0.7943 |
| 0.277 | 17.0 | 340 | 0.5609 | 0.8038 |
| 0.2586 | 18.0 | 360 | 0.5427 | 0.8133 |
| 0.2405 | 19.0 | 380 | 0.5207 | 0.8165 |
| 0.2141 | 20.0 | 400 | 0.4552 | 0.8323 |
| 0.2052 | 21.0 | 420 | 0.5201 | 0.8006 |
| 0.2182 | 22.0 | 440 | 0.3928 | 0.8544 |
| 0.1698 | 23.0 | 460 | 0.4459 | 0.8449 |
| 0.1618 | 24.0 | 480 | 0.4502 | 0.8323 |
| 0.1915 | 25.0 | 500 | 0.4057 | 0.8703 |
| 0.1596 | 26.0 | 520 | 0.4650 | 0.8386 |
| 0.1446 | 27.0 | 540 | 0.3713 | 0.8766 |
| 0.17 | 28.0 | 560 | 0.4394 | 0.8544 |
| 0.141 | 29.0 | 580 | 0.5494 | 0.8196 |
| 0.1563 | 30.0 | 600 | 0.5431 | 0.8196 |
| 0.1216 | 31.0 | 620 | 0.5010 | 0.8481 |
| 0.1081 | 32.0 | 640 | 0.4454 | 0.8608 |
| 0.1205 | 33.0 | 660 | 0.4664 | 0.8418 |
| 0.1325 | 34.0 | 680 | 0.4690 | 0.8481 |
| 0.1152 | 35.0 | 700 | 0.3433 | 0.9019 |
| 0.1218 | 36.0 | 720 | 0.4063 | 0.8671 |
| 0.1163 | 37.0 | 740 | 0.3552 | 0.8861 |
| 0.0976 | 38.0 | 760 | 0.4137 | 0.8734 |
| 0.1163 | 39.0 | 780 | 0.4193 | 0.8797 |
| 0.1034 | 40.0 | 800 | 0.3740 | 0.8892 |
| 0.1033 | 41.0 | 820 | 0.4036 | 0.8671 |
| 0.0806 | 42.0 | 840 | 0.4396 | 0.8639 |
| 0.0764 | 43.0 | 860 | 0.4137 | 0.8608 |
| 0.0955 | 44.0 | 880 | 0.4019 | 0.8734 |
| 0.0768 | 45.0 | 900 | 0.3778 | 0.8829 |
| 0.0824 | 46.0 | 920 | 0.3930 | 0.8829 |
| 0.0837 | 47.0 | 940 | 0.3524 | 0.8924 |
| 0.0817 | 48.0 | 960 | 0.3113 | 0.9177 |
| 0.0767 | 49.0 | 980 | 0.3881 | 0.8797 |
| 0.0769 | 50.0 | 1000 | 0.3158 | 0.9051 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
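For completeness, a minimal inference sketch for this image classifier follows. The class labels come from the training imagefolder and are not documented in the card, so inspect the returned label strings; the image path is a placeholder.

```python
# Minimal sketch: classify an image with the fine-tuned ViT checkpoint.
from PIL import Image
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="talli96123/meat_calssify_fresh_crop_fixed_overlap_V_0_2",
)
image = Image.open("sample_meat.jpg")  # placeholder path
print(clf(image))  # top classes with scores; label names set by the trainer
```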
VIM-Bench/v-mllm-13b
VIM-Bench
2024-06-12T01:02:53Z
5
1
transformers
[ "transformers", "pytorch", "llava", "text-generation", "arxiv:2311.17647", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T20:42:32Z
---
license: llama2
---

# v-MLLM Model Card

## Model details

**Model type:**
v-MLLM is an open-source MLLM trained on the Visual-Modality Instruction (VIM) corpus; it can robustly follow both text-modality and visual-modality instructions.

**Model date:**
v-MLLM-13B was trained in January 2024.

**GitHub for more information:**
https://github.com/VIM-Bench/VIM_TOOL

## License
v-MLLM is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Intended use
**Primary intended uses:**
The primary use of v-MLLM is for research on multimodal large language models.

**Primary intended users:**
The primary intended users of the model are researchers in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- 846k VIM corpus based on the LVIS-Instruct4V corpus.

# Citation

Please kindly cite our paper if you find our resources useful:
```
@misc{li2024text,
      title={Text as Images: Can Multimodal Large Language Models Follow Printed Instructions in Pixels?},
      author={Xiujun Li and Yujie Lu and Zhe Gan and Jianfeng Gao and William Yang Wang and Yejin Choi},
      year={2024},
      eprint={2311.17647},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{lu2023vim,
      title={VIM: Probing Multimodal Large Language Models for Visual Embedded Instruction Following},
      author={Yujie Lu and Xiujun Li and William Yang Wang and Yejin Choi},
      year={2023},
      eprint={2311.17647},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
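The repository is tagged as a LLaVA-architecture checkpoint. If it is compatible with the Transformers LLaVA port, which is an assumption rather than something the card confirms (the original LLaVA codebase may be the intended loader), inference would look roughly like this:

```python
# Rough sketch only: assumes this checkpoint loads with the Transformers
# LLaVA classes, which is NOT confirmed by the card.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "VIM-Bench/v-mllm-13b"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# In the VIM setting the instruction is embedded in the image itself,
# so the text prompt can stay minimal.
image = Image.open("instruction_rendered.png")  # image with printed instruction
inputs = processor(
    text="USER: <image>\nASSISTANT:", images=image, return_tensors="pt"
).to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(out[0], skip_special_tokens=True))
```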
VIM-Bench/v-mllm-7b
VIM-Bench
2024-06-12T01:02:33Z
4
1
transformers
[ "transformers", "pytorch", "llava", "text-generation", "arxiv:2311.17647", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T20:29:00Z
---
license: llama2
---

# v-MLLM Model Card

## Model details

**Model type:**
v-MLLM is an open-source MLLM trained on the Visual-Modality Instruction (VIM) corpus; it can robustly follow both text-modality and visual-modality instructions.

**Model date:**
v-MLLM-7B was trained in January 2024.

**GitHub for more information:**
https://github.com/VIM-Bench/VIM_TOOL

## License
v-MLLM is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Intended use
**Primary intended uses:**
The primary use of v-MLLM is for research on multimodal large language models.

**Primary intended users:**
The primary intended users of the model are researchers in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- 846k VIM corpus based on the LVIS-Instruct4V corpus.

# Citation

Please kindly cite our paper if you find our resources useful:
```
@misc{li2024text,
      title={Text as Images: Can Multimodal Large Language Models Follow Printed Instructions in Pixels?},
      author={Xiujun Li and Yujie Lu and Zhe Gan and Jianfeng Gao and William Yang Wang and Yejin Choi},
      year={2024},
      eprint={2311.17647},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{lu2023vim,
      title={VIM: Probing Multimodal Large Language Models for Visual Embedded Instruction Following},
      author={Yujie Lu and Xiujun Li and William Yang Wang and Yejin Choi},
      year={2023},
      eprint={2311.17647},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
QuantFactory/Hathor-L3-8B-v.02-GGUF
QuantFactory
2024-06-12T01:02:30Z
81
1
null
[ "gguf", "text-generation", "en", "base_model:Nitral-AI/Hathor_Stable-v0.2-L3-8B", "base_model:quantized:Nitral-AI/Hathor_Stable-v0.2-L3-8B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-11T05:21:01Z
---
license: other
language:
- en
base_model: Nitral-AI/Hathor-L3-8B-v.02
pipeline_tag: text-generation
---

# QuantFactory/Hathor-L3-8B-v.02-GGUF
This is a quantized version of [Nitral-AI/Hathor-L3-8B-v.02](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.02), created using llama.cpp.

# Model Description

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/kJF-ER-uPDH6O2m6qB9wg.jpeg)

# "Hathor-v0.2 is a model based on the LLaMA 3 architecture, designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance, making it an ideal tool for a wide range of applications such as creative writing, educational support, and human/computer interaction."

# Recommended ST Presets: [Hathor Presets](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.01/tree/main/Hathor%20Presets)

---

# Notes: Hathor is trained for 3 epochs on private data, synthetic Opus instructions, a mix of light/classical novel data, and roleplaying chat pairs, over Llama 3 8B Instruct. (expanded)

---

- If you want to use vision functionality:
  * You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
- To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file; this can be found inside this model repo. [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16)
  * You can load the **mmproj** by using the corresponding section in the interface:

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)

---
stojchet/python-sft-markdown
stojchet
2024-06-12T01:01:02Z
4
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:deepseek-ai/deepseek-coder-1.3b-base", "base_model:adapter:deepseek-ai/deepseek-coder-1.3b-base", "license:other", "region:us" ]
null
2024-06-11T20:11:33Z
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: deepseek-ai/deepseek-coder-1.3b-base
datasets:
- generator
model-index:
- name: python-sft-markdown
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# python-sft-markdown

This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.42.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
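Because the base model is a completion-style code model rather than a chat model, a plain-prompt sketch is the natural usage pattern. This is an untested illustration that assumes the adapter weights are stored in this repo.

```python
# Minimal sketch: pair the SFT adapter with its deepseek-coder base for
# code completion. Untested; prompt is a placeholder.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "stojchet/python-sft-markdown")

ids = tokenizer("def quicksort(arr):", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```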
ymoslem/whisper-medium-ga2en-v5.2.2-r
ymoslem
2024-06-12T01:00:22Z
14
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ga", "en", "dataset:ymoslem/IWSLT2023-GA-EN", "dataset:ymoslem/FLEURS-GA-EN", "dataset:ymoslem/BitesizeIrish-GA-EN", "dataset:ymoslem/SpokenWords-GA-EN-MTed", "dataset:ymoslem/Tatoeba-Speech-Irish", "dataset:ymoslem/Wikimedia-Speech-Irish", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-11T21:03:22Z
---
language:
- ga
- en
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
- ymoslem/Tatoeba-Speech-Irish
- ymoslem/Wikimedia-Speech-Irish
metrics:
- bleu
- wer
model-index:
- name: Whisper Small GA-EN Speech Translation, 1 epoch, 10k steps
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia
      type: ymoslem/IWSLT2023-GA-EN
    metrics:
    - name: Bleu
      type: bleu
      value: 34.31
    - name: Wer
      type: wer
      value: 59.70283656010806
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Small GA-EN Speech Translation, 1 epoch, 10k steps

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3521
- Bleu: 34.31
- Chrf: 52.5
- Wer: 59.7028

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.02
- training_steps: 13000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:-----:|:-----:|:---------------:|:--------:|
| 2.6291 | 0.0109 | 100 | 2.33 | 16.34 | 2.1971 | 175.5516 |
| 2.6591 | 0.0219 | 200 | 5.57 | 22.49 | 2.0357 | 122.2873 |
| 2.5637 | 0.0328 | 300 | 7.67 | 26.29 | 1.8690 | 133.0032 |
| 2.2954 | 0.0438 | 400 | 11.2 | 30.03 | 1.8062 | 114.2278 |
| 2.3292 | 0.0547 | 500 | 9.85 | 29.28 | 1.7421 | 117.2895 |
| 2.1223 | 0.0657 | 600 | 14.56 | 32.56 | 1.6739 | 84.2864 |
| 2.2398 | 0.0766 | 700 | 13.86 | 34.74 | 1.7187 | 98.9644 |
| 2.002 | 0.0876 | 800 | 15.53 | 36.64 | 1.6392 | 96.7582 |
| 1.8611 | 0.0985 | 900 | 15.8 | 36.32 | 1.6283 | 94.3719 |
| 1.8498 | 0.1095 | 1000 | 17.58 | 36.0 | 1.6102 | 85.5921 |
| 1.7585 | 0.1204 | 1100 | 15.91 | 36.61 | 1.6337 | 100.2251 |
| 1.6115 | 0.1314 | 1200 | 22.21 | 39.94 | 1.5381 | 76.8122 |
| 1.4415 | 0.1423 | 1300 | 20.36 | 37.87 | 1.5864 | 79.1986 |
| 1.5103 | 0.1533 | 1400 | 23.2 | 41.26 | 1.4925 | 75.2364 |
| 1.6576 | 0.1642 | 1500 | 18.12 | 40.49 | 1.4508 | 102.9266 |
| 1.3429 | 0.1752 | 1600 | 27.88 | 43.74 | 1.4399 | 69.7884 |
| 1.2522 | 0.1861 | 1700 | 23.04 | 43.31 | 1.4256 | 77.1724 |
| 1.2018 | 0.1970 | 1800 | 21.06 | 40.39 | 1.4072 | 78.6583 |
| 1.1945 | 0.2080 | 1900 | 23.0 | 42.71 | 1.4222 | 76.7222 |
| 1.1869 | 0.2189 | 2000 | 22.54 | 42.02 | 1.3992 | 75.8667 |
| 1.1752 | 0.2299 | 2100 | 20.81 | 41.07 | 1.3926 | 79.5137 |
| 1.0281 | 0.2408 | 2200 | 27.24 | 45.55 | 1.3633 | 69.6083 |
| 0.894 | 0.2518 | 2300 | 28.6 | 45.58 | 1.3287 | 65.8712 |
| 0.9788 | 0.2627 | 2400 | 27.75 | 46.21 | 1.3138 | 69.2931 |
| 0.8418 | 0.2737 | 2500 | 27.85 | 46.17 | 1.3064 | 68.3026 |
| 0.7559 | 0.2846 | 2600 | 28.44 | 48.52 | 1.2903 | 68.3476 |
| 0.8632 | 0.2956 | 2700 | 27.87 | 46.86 | 1.2834 | 68.3476 |
| 0.7501 | 0.3065 | 2800 | 28.63 | 49.25 | 1.2669 | 68.5277 |
| 0.6953 | 0.3175 | 2900 | 30.46 | 48.83 | 1.2615 | 64.4304 |
| 0.7195 | 0.3284 | 3000 | 27.49 | 47.94 | 1.2514 | 71.0941 |
| 0.6155 | 0.3394 | 3100 | 30.06 | 49.64 | 1.2428 | 66.5916 |
| 0.605 | 0.3503 | 3200 | 31.64 | 50.27 | 1.2040 | 63.8451 |
| 0.6349 | 0.3612 | 3300 | 28.96 | 49.35 | 1.2077 | 65.3760 |
| 0.4669 | 0.3722 | 3400 | 31.17 | 48.95 | 1.2219 | 64.2503 |
| 0.5196 | 0.3831 | 3500 | 30.97 | 50.13 | 1.2124 | 63.8001 |
| 0.5141 | 0.3941 | 3600 | 31.97 | 50.8 | 1.2026 | 63.0347 |
| 0.4221 | 0.4050 | 3700 | 31.76 | 51.35 | 1.1893 | 63.4399 |
| 0.2951 | 0.4160 | 3800 | 32.4 | 51.08 | 1.2049 | 63.1247 |
| 0.3898 | 0.4269 | 3900 | 32.15 | 51.09 | 1.1906 | 63.5299 |
| 0.4071 | 0.4379 | 4000 | 33.1 | 51.85 | 1.1873 | 62.4043 |
| 0.3975 | 0.4488 | 4100 | 29.58 | 49.33 | 1.2117 | 70.3287 |
| 0.4206 | 0.4598 | 4200 | 31.69 | 50.8 | 1.2150 | 65.0158 |
| 0.2935 | 0.4707 | 4300 | 32.9 | 50.01 | 1.2484 | 62.8546 |
| 0.3718 | 0.4817 | 4400 | 31.64 | 50.55 | 1.2055 | 63.8451 |
| 0.3722 | 0.4926 | 4500 | 28.16 | 49.28 | 1.2200 | 70.4638 |
| 0.2986 | 0.5036 | 4600 | 28.76 | 49.9 | 1.2240 | 68.7528 |
| 0.3327 | 0.5145 | 4700 | 29.34 | 49.67 | 1.2052 | 67.5822 |
| 0.2489 | 0.5255 | 4800 | 32.52 | 51.77 | 1.2083 | 62.4493 |
| 0.3653 | 0.5364 | 4900 | 31.48 | 51.16 | 1.2166 | 63.8451 |
| 0.3326 | 0.5473 | 5000 | 33.04 | 51.71 | 1.2169 | 62.4493 |
| 0.3045 | 0.5583 | 5100 | 27.45 | 48.22 | 1.2460 | 68.9779 |
| 0.3444 | 0.5692 | 5200 | 33.14 | 50.76 | 1.2829 | 62.2692 |
| 0.3236 | 0.5802 | 5300 | 28.89 | 49.37 | 1.2499 | 70.3737 |
| 0.3004 | 0.5911 | 5400 | 29.89 | 49.29 | 1.3165 | 68.7078 |
| 0.3019 | 0.6021 | 5500 | 32.8 | 49.78 | 1.2782 | 62.8095 |
| 0.2923 | 0.6130 | 5600 | 31.75 | 50.26 | 1.2468 | 63.3498 |
| 0.3237 | 0.6240 | 5700 | 34.4 | 52.59 | 1.2511 | 61.0986 |
| 0.2226 | 0.6349 | 5800 | 30.51 | 50.38 | 1.2479 | 63.3498 |
| 0.2207 | 0.6459 | 5900 | 32.68 | 51.97 | 1.2641 | 62.1342 |
| 0.2017 | 0.6568 | 6000 | 32.47 | 51.36 | 1.2640 | 62.6745 |
| 0.201 | 0.6678 | 6100 | 33.6 | 52.29 | 1.2774 | 61.4588 |
| 0.203 | 0.6787 | 6200 | 30.27 | 50.84 | 1.2670 | 65.6461 |
| 0.1456 | 0.6897 | 6300 | 31.2 | 51.05 | 1.2656 | 63.3048 |
| 0.1607 | 0.7006 | 6400 | 30.39 | 51.04 | 1.2611 | 65.8262 |
| 0.1933 | 0.7115 | 6500 | 31.78 | 50.92 | 1.2545 | 63.0797 |
| 0.1537 | 0.7225 | 6600 | 30.18 | 50.18 | 1.2500 | 64.7006 |
| 0.1279 | 0.7334 | 6700 | 33.23 | 51.0 | 1.2548 | 59.8379 |
| 0.1189 | 0.7444 | 6800 | 33.51 | 50.67 | 1.2594 | 61.1887 |
| 0.1056 | 0.7553 | 6900 | 32.97 | 51.02 | 1.2578 | 61.9991 |
| 0.1105 | 0.7663 | 7000 | 32.74 | 50.83 | 1.2569 | 62.0441 |
| 0.1183 | 0.7772 | 7100 | 34.07 | 52.2 | 1.2590 | 60.4232 |
| 0.1373 | 0.7882 | 7200 | 33.55 | 50.6 | 1.2430 | 61.2787 |
| 0.1325 | 0.7991 | 7300 | 32.36 | 50.39 | 1.2548 | 62.3143 |
| 0.0907 | 0.8101 | 7400 | 32.28 | 50.99 | 1.2578 | 61.2787 |
| 0.0919 | 0.8210 | 7500 | 33.01 | 51.81 | 1.2791 | 60.4683 |
| 0.0852 | 0.8320 | 7600 | 32.97 | 51.56 | 1.2782 | 61.5489 |
| 0.1223 | 0.8429 | 7700 | 33.57 | 52.33 | 1.2638 | 59.9280 |
| 0.0826 | 0.8539 | 7800 | 33.83 | 52.7 | 1.2634 | 60.1531 |
| 0.0783 | 0.8648 | 7900 | 33.79 | 52.31 | 1.2595 | 60.1081 |
| 0.0986 | 0.8758 | 8000 | 34.33 | 52.54 | 1.2608 | 59.4327 |
| 0.1148 | 0.8867 | 8100 | 34.03 | 52.52 | 1.2736 | 59.8829 |
| 0.1134 | 0.8976 | 8200 | 34.14 | 51.64 | 1.3073 | 61.5038 |
| 0.1166 | 0.9086 | 8300 | 30.51 | 49.26 | 1.3385 | 65.5561 |
| 0.0871 | 0.9195 | 8400 | 32.31 | 51.06 | 1.3313 | 62.5394 |
| 0.0927 | 0.9305 | 8500 | 28.64 | 48.43 | 1.3898 | 69.3832 |
| 0.1012 | 0.9414 | 8600 | 33.12 | 52.02 | 1.3144 | 61.4138 |
| 0.0742 | 0.9524 | 8700 | 33.68 | 51.38 | 1.3284 | 61.7740 |
| 0.0802 | 0.9633 | 8800 | 34.33 | 51.38 | 1.3300 | 61.4138 |
| 0.0799 | 0.9743 | 8900 | 33.72 | 50.77 | 1.3328 | 60.1981 |
| 0.0936 | 0.9852 | 9000 | 34.76 | 51.4 | 1.3181 | 60.0630 |
| 0.1091 | 0.9962 | 9100 | 35.13 | 52.6 | 1.3096 | 59.9730 |
| 0.0427 | 1.0071 | 9200 | 35.49 | 53.12 | 1.2905 | 59.8379 |
| 0.0338 | 1.0181 | 9300 | 35.33 | 52.62 | 1.3097 | 60.5133 |
| 0.0363 | 1.0290 | 9400 | 35.51 | 53.06 | 1.3172 | 59.6128 |
| 0.0319 | 1.0400 | 9500 | 36.82 | 53.6 | 1.3166 | 58.3971 |
| 0.0434 | 1.0509 | 9600 | 35.62 | 53.28 | 1.3050 | 59.6578 |
| 0.0218 | 1.0619 | 9700 | 35.57 | 53.28 | 1.3096 | 59.5227 |
| 0.0316 | 1.0728 | 9800 | 36.14 | 53.87 | 1.3162 | 58.3971 |
| 0.0315 | 1.0837 | 9900 | 36.26 | 54.16 | 1.3121 | 58.3521 |
| 0.0229 | 1.0947 | 10000 | 36.12 | 53.74 | 1.3134 | 58.3071 |
| 0.0561 | 1.1056 | 10100 | 34.27 | 53.3 | 1.3263 | 61.0086 |
| 0.0485 | 1.1166 | 10200 | 34.26 | 53.1 | 1.3319 | 60.6934 |
| 0.0582 | 1.1275 | 10300 | 30.37 | 51.24 | 1.3893 | 70.2837 |
| 0.0559 | 1.1385 | 10400 | 31.61 | 49.4 | 1.4005 | 66.0513 |
| 0.055 | 1.1494 | 10500 | 31.93 | 50.99 | 1.3793 | 65.0608 |
| 0.0612 | 1.1604 | 10600 | 33.31 | 51.91 | 1.3749 | 62.9896 |
| 0.0599 | 1.1713 | 10700 | 33.87 | 52.96 | 1.3679 | 61.7740 |
| 0.0536 | 1.1823 | 10800 | 32.54 | 51.57 | 1.3313 | 62.2692 |
| 0.0531 | 1.1932 | 10900 | 33.83 | 52.11 | 1.3883 | 61.9991 |
| 0.0582 | 1.2042 | 11000 | 33.18 | 51.63 | 1.3894 | 61.5038 |
| 0.0506 | 1.2151 | 11100 | 32.51 | 51.24 | 1.3338 | 63.5299 |
| 0.0489 | 1.2261 | 11200 | 32.95 | 51.53 | 1.3625 | 64.2053 |
| 0.0387 | 1.2370 | 11300 | 34.5 | 52.47 | 1.3496 | 60.4232 |
| 0.0512 | 1.2479 | 11400 | 34.5 | 52.72 | 1.3731 | 60.6934 |
| 0.0459 | 1.2589 | 11500 | 33.27 | 51.89 | 1.3655 | 62.8996 |
| 0.0457 | 1.2698 | 11600 | 30.26 | 49.96 | 1.3824 | 67.7623 |
| 0.0407 | 1.2808 | 11700 | 31.56 | 51.37 | 1.3775 | 62.9446 |
| 0.0396 | 1.2917 | 11800 | 34.06 | 51.91 | 1.3677 | 59.6128 |
| 0.0419 | 1.3027 | 11900 | 34.18 | 52.77 | 1.3648 | 60.1081 |
| 0.0291 | 1.3136 | 12000 | 33.9 | 51.61 | 1.3697 | 60.6934 |
| 0.0351 | 1.3246 | 12100 | 34.66 | 53.1 | 1.3565 | 60.5133 |
| 0.0329 | 1.3355 | 12200 | 33.59 | 53.0 | 1.3592 | 61.8190 |
| 0.0409 | 1.3465 | 12300 | 34.41 | 52.96 | 1.3690 | 59.6578 |
| 0.0386 | 1.3574 | 12400 | 34.68 | 53.26 | 1.3440 | 59.1175 |
| 0.0221 | 1.3684 | 12500 | 33.35 | 51.9 | 1.3450 | 60.3332 |
| 0.032 | 1.3793 | 12600 | 33.09 | 52.07 | 1.3514 | 62.3143 |
| 0.0364 | 1.3903 | 12700 | 34.08 | 52.49 | 1.3538 | 60.0630 |
| 0.024 | 1.4012 | 12800 | 34.75 | 53.14 | 1.3451 | 58.8474 |
| 0.0245 | 1.4122 | 12900 | 34.09 | 52.38 | 1.3544 | 59.7479 |
| 0.0271 | 1.4231 | 13000 | 34.31 | 52.5 | 1.3521 | 59.7028 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.2.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
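For readers who want to try the checkpoint, a minimal inference sketch follows. It is untested and assumes a 16 kHz mono audio file; the file path is a placeholder.

```python
# Minimal sketch: Irish-to-English speech translation with this checkpoint
# via the standard Transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-medium-ga2en-v5.2.2-r",
)
result = asr("irish_clip.wav")  # placeholder path to an Irish audio clip
print(result["text"])  # English output of the speech-translation model
```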
DBangshu/GPT2_1_2
DBangshu
2024-06-12T00:59:38Z
136
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-12T00:59:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dobinyim/llama38binstruct_summarize
dobinyim
2024-06-12T00:58:17Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "base_model:adapter:NousResearch/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-06-12T00:57:59Z
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: llama38binstruct_summarize
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llama38binstruct_summarize

This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6495

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3738        | 1.3158 | 25   | 1.5266          |
| 0.3852        | 2.6316 | 50   | 1.5215          |
| 0.2301        | 3.9474 | 75   | 1.5313          |
| 0.1008        | 5.2632 | 100  | 1.6495          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
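As a usage note (not part of the auto-generated card): a minimal sketch of loading this PEFT adapter on top of its base model with the standard `peft`/`transformers` APIs; the prompt and generation settings are illustrative assumptions.

```python
# Minimal sketch: attach the LoRA adapter from this repo to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Meta-Llama-3-8B-Instruct"
adapter_id = "dobinyim/llama38binstruct_summarize"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # load adapter weights

prompt = "Summarize the following text: ..."  # illustrative placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```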
FarahOU/adapt-llm-Timesheet-Fr-90xr512-2-test
FarahOU
2024-06-12T00:55:10Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:AdaptLLM/finance-chat", "base_model:adapter:AdaptLLM/finance-chat", "region:us" ]
null
2024-06-12T00:38:09Z
--- library_name: peft base_model: AdaptLLM/finance-chat --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
0xfaskety/Qwen-Qwen1.5-7B-1718153662
0xfaskety
2024-06-12T00:54:29Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
2024-06-12T00:54:22Z
--- library_name: peft base_model: Qwen/Qwen1.5-7B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Augusto777/vit-base-patch16-224-ve-U11-12
Augusto777
2024-06-12T00:50:06Z
216
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-11T23:52:12Z
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-ve-U11-12
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8478260869565217
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-ve-U11-12

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5924
- Accuracy: 0.8478

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 12

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3668        | 0.96  | 16   | 1.2319          | 0.5652   |
| 1.1102        | 1.97  | 33   | 0.9996          | 0.6957   |
| 0.8257        | 2.99  | 50   | 0.8429          | 0.6304   |
| 0.68          | 4.0   | 67   | 0.6906          | 0.8043   |
| 0.4763        | 4.96  | 83   | 0.6871          | 0.7609   |
| 0.341         | 5.97  | 100  | 0.5924          | 0.8478   |
| 0.2956        | 6.99  | 117  | 0.4863          | 0.8478   |
| 0.2376        | 8.0   | 134  | 0.5947          | 0.7826   |
| 0.2098        | 8.96  | 150  | 0.5579          | 0.8043   |
| 0.2213        | 9.97  | 167  | 0.6474          | 0.7609   |
| 0.1767        | 10.99 | 184  | 0.6015          | 0.7826   |
| 0.1757        | 11.46 | 192  | 0.5928          | 0.7609   |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
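A minimal inference sketch (not from the original card): classifying an image with the fine-tuned checkpoint via the `transformers` pipeline; the image path is an illustrative placeholder.

```python
# Minimal sketch: run the fine-tuned ViT classifier on a local image.
from transformers import pipeline

classifier = pipeline("image-classification", model="Augusto777/vit-base-patch16-224-ve-U11-12")
print(classifier("example.jpg"))  # returns a list of {label, score} dicts
```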
bella05/pogny-8-0.00002-all
bella05
2024-06-12T00:40:46Z
109
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-large", "base_model:finetune:klue/roberta-large", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-11T08:42:04Z
---
base_model: klue/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: pogny-8-0.00002-all
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pogny-8-0.00002-all

This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2442
- Accuracy: 0.7276
- F1: 0.7250

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.5541        | 1.0   | 10215 | 0.8117          | 0.7268   | 0.7233 |
| 0.3571        | 2.0   | 20430 | 0.9222          | 0.7278   | 0.7256 |
| 0.2149        | 3.0   | 30645 | 1.2442          | 0.7276   | 0.7250 |

### Framework versions

- Transformers 4.34.1
- Pytorch 2.1.0a0+b5021ba
- Datasets 2.6.2
- Tokenizers 0.14.1
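A minimal inference sketch (not from the original card): since the checkpoint is tagged `text-classification`, it can be scored through the `transformers` pipeline; the input sentence is an illustrative placeholder.

```python
# Minimal sketch: classify text with the fine-tuned KLUE RoBERTa model.
from transformers import pipeline

classifier = pipeline("text-classification", model="bella05/pogny-8-0.00002-all")
print(classifier("예시 문장입니다."))  # illustrative Korean input; returns {label, score}
```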
TTTXXX01/zephyr-7b-DPO-full
TTTXXX01
2024-06-12T00:36:52Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:finetune:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T18:00:38Z
---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-DPO-full
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-DPO-full

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
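A minimal generation sketch (not from the original card), assuming the standard `transformers` text-generation pipeline; the prompt and token budget are illustrative.

```python
# Minimal sketch: generate text with the DPO-tuned checkpoint.
from transformers import pipeline

pipe = pipeline("text-generation", model="TTTXXX01/zephyr-7b-DPO-full", device_map="auto")
out = pipe("Explain direct preference optimization in one sentence.", max_new_tokens=64)
print(out[0]["generated_text"])
```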
Kame1024/evo-test-7b-01
Kame1024
2024-06-12T00:36:04Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-12T00:31:23Z
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# final_merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./storage2/input_models/Mistral-7B-v0.1_8133861 as a base.

### Models Merged

The following models were included in the merge:
* ./storage2/input_models/WizardMath-7B-V1.1_2027605156
* ./storage2/input_models/Abel-7B-002_121690448
* ./storage2/input_models/shisa-gamma-7b-v1_4025154171

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: ./storage2/input_models/Mistral-7B-v0.1_8133861
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 8]
    model: ./storage2/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 0.6699910985974532
      weight: 0.13529360500839205
  - layer_range: [0, 8]
    model: ./storage2/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.8652557087160213
      weight: 0.6985440552740758
  - layer_range: [0, 8]
    model: ./storage2/input_models/Abel-7B-002_121690448
    parameters:
      density: 0.4323464491414452
      weight: 0.8179823325064868
  - layer_range: [0, 8]
    model: ./storage2/input_models/Mistral-7B-v0.1_8133861
- sources:
  - layer_range: [8, 16]
    model: ./storage2/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 1.0
      weight: 0.03216719764341956
  - layer_range: [8, 16]
    model: ./storage2/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.6967615831667242
      weight: 0.8043194027622319
  - layer_range: [8, 16]
    model: ./storage2/input_models/Abel-7B-002_121690448
    parameters:
      density: 0.7897142847167249
      weight: 0.09233872355906134
  - layer_range: [8, 16]
    model: ./storage2/input_models/Mistral-7B-v0.1_8133861
- sources:
  - layer_range: [16, 24]
    model: ./storage2/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 1.0
      weight: 0.6740405166949244
  - layer_range: [16, 24]
    model: ./storage2/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.5417954561416459
      weight: 0.308476065247547
  - layer_range: [16, 24]
    model: ./storage2/input_models/Abel-7B-002_121690448
    parameters:
      density: 0.7841601014052402
      weight: 0.02993327454595157
  - layer_range: [16, 24]
    model: ./storage2/input_models/Mistral-7B-v0.1_8133861
- sources:
  - layer_range: [24, 32]
    model: ./storage2/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 0.5892764365325144
      weight: 0.7288214753840682
  - layer_range: [24, 32]
    model: ./storage2/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.8133101423312465
      weight: 0.06233401147902682
  - layer_range: [24, 32]
    model: ./storage2/input_models/Abel-7B-002_121690448
    parameters:
      density: 0.9351019303077212
      weight: 0.008694459163933368
  - layer_range: [24, 32]
    model: ./storage2/input_models/Mistral-7B-v0.1_8133861
```
DBangshu/GPT2_0_2
DBangshu
2024-06-12T00:35:46Z
136
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-12T00:35:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/SoMix2-xb-GGUF
mradermacher
2024-06-12T00:34:52Z
70
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1", "argilla/notus-7b-v1", "en", "base_model:powermove72/SoMix2-xb", "base_model:quantized:powermove72/SoMix2-xb", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-11T23:27:46Z
---
base_model: powermove72/SoMix2-xb
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1
- argilla/notus-7b-v1
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/SoMix2-xb

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.Q3_K_S.gguf) | Q3_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.Q3_K_L.gguf) | Q3_K_L | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.Q5_K_S.gguf) | Q5_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.Q5_K_M.gguf) | Q5_K_M | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SoMix2-xb-GGUF/resolve/main/SoMix2-xb.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
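As a hedged local-inference sketch (not part of the original card): one way to run a quant from the table above with `llama-cpp-python`; the chosen file and sampling settings are illustrative.

```python
# Minimal sketch: load a GGUF quant and generate a short completion.
from llama_cpp import Llama

llm = Llama(model_path="SoMix2-xb.Q4_K_M.gguf", n_ctx=4096)  # file name from the table above
out = llm("Write a haiku about model quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```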
T3Q-LLM-Product/T3Q-LLM2-Solar-10.7B-v1.0
T3Q-LLM-Product
2024-06-12T00:32:48Z
37
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-31T02:05:49Z
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f22e4076fedc4fd11e978f/MoTedec_ZL8GM2MmGyAPs.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6653cca1f72c9a37ceeef9bc/eZxdg4WmC_QcA-E8QFBFm.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6653cca1f72c9a37ceeef9bc/jLe6Y5wJVCyzgZDpWzKed.png)
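A minimal generation sketch (not from the original card), assuming the standard `transformers` causal-LM APIs; dtype, device placement, and the prompt are illustrative.

```python
# Minimal sketch: load the model and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "T3Q-LLM-Product/T3Q-LLM2-Solar-10.7B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Briefly introduce yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```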
dgtdgt/mistruct3-trtllm-awq-a4000
dgtdgt
2024-06-12T00:26:40Z
2
0
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
2024-06-03T03:53:47Z
f430a4b447ef4cba22698902d43eae0debf08594

Quantize the model to INT4 AWQ:

```bash
python ../quantization/quantize.py --model_dir /Mistral-7B-Instruct-v0.3 \
    --dtype float16 \
    --qformat int4_awq \
    --awq_block_size 128 \
    --output_dir ./quantized_int4-awq \
    --calib_size 32
```

Build the TensorRT-LLM engine:

```bash
trtllm-build --checkpoint_dir /mistruct3trtllm/quantized-i4awq \
    --output_dir ./awq_engine \
    --gemm_plugin auto \
    --max_batch_size 32 \
    --max_input_len 8192 \
    --max_output_len 4096 \
    --max_beam_width 1 \
    --max_num_tokens 16384
```

Run inference against the built engine:

```bash
python3 ../run.py --engine_dir /workspaces/models/awq_engine \
    --max_output_len 100 \
    --tokenizer_dir mistralai/Mistral-7B-Instruct-v0.3 \
    --input_text "How do I count to nine in French?"
```
Mattcpenniman/phicount
Mattcpenniman
2024-06-12T00:20:55Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-06-11T23:58:45Z
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/Phi-3-mini-4k-instruct
model-index:
- name: phicount
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# phicount

This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
salmanshahid/test_a2a_model
salmanshahid
2024-06-12T00:20:05Z
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2024-06-12T00:13:19Z
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---

This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.

Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
mharb/dqn-SpaceInvadersNoFrameskip-v4
mharb
2024-06-12T00:17:51Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-06-12T00:17:04Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 785.00 +/- 261.60
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mharb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mharb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mharb
```

## Hyperparameters
```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
datek/google-gemma-2b-1718151241
datek
2024-06-12T00:14:04Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
2024-06-12T00:14:01Z
--- library_name: peft base_model: google/gemma-2b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
lightly-ai/simclrv2-imagenet1k-r152_3x_sk1
lightly-ai
2024-06-12T00:13:50Z
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
2024-06-12T00:01:08Z
---
license: apache-2.0
datasets:
- ILSVRC/imagenet-1k
metrics:
- accuracy
tags:
- self-supervised learning
---

The official [SimCLRv2](https://arxiv.org/abs/2006.10029) weights, converted to PyTorch using the conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch).

```bibtex
@article{chen2020big,
  title={Big Self-Supervised Models are Strong Semi-Supervised Learners},
  author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey},
  journal={arXiv preprint arXiv:2006.10029},
  year={2020}
}
```
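A minimal inspection sketch (not from the original card): loading the converted checkpoint with plain PyTorch to examine its layout; the file name is an assumption and the exact structure depends on the conversion script.

```python
# Minimal sketch: open the converted SimCLRv2 checkpoint and list its top-level keys.
import torch

state = torch.load("r152_3x_sk1.pth", map_location="cpu")  # assumed file name
if isinstance(state, dict):
    for key in list(state)[:10]:  # peek at the first few entries
        print(key)
```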
hdve/google-gemma-7b-1718150943
hdve
2024-06-12T00:12:01Z
7
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-12T00:09:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SiMajid/reward-train-facebook
SiMajid
2024-06-12T00:11:40Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:facebook/opt-1.3b", "base_model:adapter:facebook/opt-1.3b", "region:us" ]
null
2024-06-12T00:05:49Z
--- library_name: peft base_model: facebook/opt-1.3b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
DBangshu/GPT2_9_1
DBangshu
2024-06-12T00:11:39Z
136
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-12T00:11:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
datek/Qwen-Qwen1.5-0.5B-1718151017
datek
2024-06-12T00:10:20Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2024-06-12T00:10:18Z
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
kajamo/model_24
kajamo
2024-06-12T00:09:42Z
22
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
null
2024-06-11T19:08:11Z
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
model-index:
- name: model_24
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# model_24

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6165
- eval_accuracy: 0.7775
- eval_precision: 0.7770
- eval_recall: 0.7775
- eval_f1: 0.7771
- eval_runtime: 42.58
- eval_samples_per_second: 287.576
- eval_steps_per_second: 17.99
- epoch: 27.0
- step: 82674

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.03

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
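The usage sections above are empty; a minimal inference sketch for loading the adapter, assuming it was trained for sequence classification (the accuracy/precision/recall/F1 metrics suggest this, but the card does not state the task type):

```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

# Loads distilbert-base-uncased and attaches the saved adapter in one call.
model = AutoPeftModelForSequenceClassification.from_pretrained("kajamo/model_24")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
print(model(**inputs).logits.argmax(dim=-1))
```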
nannnzk/gemma-huzlip-tud-3
nannnzk
2024-06-11T23:58:55Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-7b", "base_model:adapter:google/gemma-7b", "region:us" ]
null
2024-06-11T23:57:50Z
--- library_name: peft base_model: google/gemma-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
lightly-ai/simclrv2-imagenet1k-r152_2x_sk0
lightly-ai
2024-06-11T23:54:53Z
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
2024-06-11T23:50:58Z
---
license: apache-2.0
datasets:
- ILSVRC/imagenet-1k
metrics:
- accuracy
tags:
- self-supervised learning
---

Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029).

Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch)

```misc
@article{chen2020big,
  title={Big Self-Supervised Models are Strong Semi-Supervised Learners},
  author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey},
  journal={arXiv preprint arXiv:2006.10029},
  year={2020}
}
```
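None of these SimCLRv2 repositories include a loading snippet; a minimal sketch for fetching and inspecting the converted checkpoint, where the checkpoint file name is an assumption (check the repository's file list):

```python
import torch
from huggingface_hub import hf_hub_download

# The file name below is an assumption; use the file actually shipped in the repo.
path = hf_hub_download(
    repo_id="lightly-ai/simclrv2-imagenet1k-r152_2x_sk0",
    filename="r152_2x_sk0.pth",
)
state = torch.load(path, map_location="cpu")
print(sorted(state.keys()))  # inspect what the conversion script stored
```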
lightly-ai/simclrv2-imagenet1k-r152_1x_sk1
lightly-ai
2024-06-11T23:49:56Z
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
2024-06-11T23:46:42Z
---
license: apache-2.0
datasets:
- ILSVRC/imagenet-1k
metrics:
- accuracy
tags:
- self-supervised learning
---

Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029).

Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch)

```misc
@article{chen2020big,
  title={Big Self-Supervised Models are Strong Semi-Supervised Learners},
  author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey},
  journal={arXiv preprint arXiv:2006.10029},
  year={2020}
}
```
DBangshu/GPT2_8_1
DBangshu
2024-06-11T23:47:35Z
136
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T23:47:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
crumbly/gpt2-linear-xl-sharded-bf16
crumbly
2024-06-11T23:47:10Z
154
0
transformers
[ "transformers", "pytorch", "gpt2l", "text-generation", "gpt2", "exbert", "custom_code", "en", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2023-07-17T15:44:44Z
---
license: mit
language:
- en
tags:
- gpt2
- exbert
inference: false
---

[crumbly/gpt2-linear-xl](https://hf.co/crumbly/gpt2-linear-xl) sharded to 1GiB chunks, in bf16 precision.
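A minimal loading sketch; since the repository uses a custom `gpt2l` architecture (note the `custom_code` tag), `trust_remote_code` is presumably required:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "crumbly/gpt2-linear-xl-sharded-bf16",
    torch_dtype=torch.bfloat16,   # weights are stored in bf16
    trust_remote_code=True,       # custom gpt2l model class
)
tokenizer = AutoTokenizer.from_pretrained("crumbly/gpt2-linear-xl-sharded-bf16")
```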
lightly-ai/simclrv2-imagenet1k-r152_1x_sk0
lightly-ai
2024-06-11T23:46:12Z
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
2024-06-11T23:43:31Z
---
license: apache-2.0
datasets:
- ILSVRC/imagenet-1k
metrics:
- accuracy
tags:
- self-supervised learning
---

Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029).

Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch)

```misc
@article{chen2020big,
  title={Big Self-Supervised Models are Strong Semi-Supervised Learners},
  author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey},
  journal={arXiv preprint arXiv:2006.10029},
  year={2020}
}
```
AlpacaAAR/llama-3-8b-sft
AlpacaAAR
2024-06-11T23:42:48Z
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T23:39:53Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
skymizer/Llama2-7b-sft-chat-custom-template-dpo
skymizer
2024-06-11T23:41:28Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "dataset:HuggingFaceH4/orca_dpo_pairs", "dataset:HuggingFaceH4/cai-conversation-harmless", "base_model:skymizer/llama2-7b-sft-chat-no-template", "base_model:finetune:skymizer/llama2-7b-sft-chat-no-template", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T13:47:42Z
---
license: llama2
base_model: elichen3051/llama2-7b-sft-chat-no-template
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/orca_dpo_pairs
- HuggingFaceH4/cai-conversation-harmless
model-index:
- name: Llama2-7b-sft-chat-custom-template-dpo
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/eli3051/huggingface/runs/6n0utdab)

# Llama2-7b-sft-chat-custom-template-dpo

This model is a fine-tuned version of [elichen3051/llama2-7b-sft-chat-no-template](https://huggingface.co/elichen3051/llama2-7b-sft-chat-no-template) on the HuggingFaceH4/ultrafeedback_binarized, the HuggingFaceH4/orca_dpo_pairs and the HuggingFaceH4/cai-conversation-harmless datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4717
- Rewards/chosen: -1.6807
- Rewards/rejected: -3.1957
- Rewards/accuracies: 0.6345
- Rewards/margins: 1.5150
- Logps/rejected: -519.5196
- Logps/chosen: -379.2986
- Logits/rejected: -2.7275
- Logits/chosen: -2.7213

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 8
- total_train_batch_size: 448
- total_eval_batch_size: 56
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6727        | 0.2032 | 43   | 0.6714          | -0.0530        | -0.0999          | 0.5871             | 0.0470          | -209.9431      | -216.5270    | -2.2167         | -2.2006       |
| 0.6056        | 0.4064 | 86   | 0.6041          | -0.5876        | -0.8878          | 0.6023             | 0.3002          | -288.7347      | -269.9940    | -3.0277         | -3.0177       |
| 0.573         | 0.6096 | 129  | 0.5451          | -0.9286        | -1.6015          | 0.6174             | 0.6729          | -360.0960      | -304.0913    | -2.9301         | -2.9238       |
| 0.5239        | 0.8128 | 172  | 0.5123          | -1.2863        | -2.2358          | 0.6288             | 0.9495          | -423.5324      | -339.8588    | -2.9884         | -2.9803       |
| 0.4668        | 1.0159 | 215  | 0.4945          | -1.4994        | -2.6377          | 0.6439             | 1.1383          | -463.7195      | -361.1752    | -2.5910         | -2.5843       |
| 0.4607        | 1.2191 | 258  | 0.4816          | -1.5810        | -2.8887          | 0.6402             | 1.3077          | -488.8177      | -369.3280    | -2.8026         | -2.7951       |
| 0.5068        | 1.4223 | 301  | 0.4764          | -1.5805        | -3.0061          | 0.6402             | 1.4256          | -500.5590      | -369.2790    | -2.7586         | -2.7513       |
| 0.4724        | 1.6255 | 344  | 0.4730          | -1.6832        | -3.1741          | 0.6383             | 1.4909          | -517.3631      | -379.5493    | -2.6296         | -2.6237       |
| 0.4836        | 1.8287 | 387  | 0.4718          | -1.6795        | -3.1900          | 0.6420             | 1.5105          | -518.9514      | -379.1832    | -2.6434         | -2.6374       |

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.3.1
- Datasets 2.19.2
- Tokenizers 0.19.1
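The card reports DPO rewards but no training snippet; a minimal sketch of a comparable `trl` DPO run, where the in-memory dataset, `beta`, sequence lengths and output directory are all illustrative placeholders (the actual run used the three HuggingFaceH4 preference datasets listed above):

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("elichen3051/llama2-7b-sft-chat-no-template")
tokenizer = AutoTokenizer.from_pretrained("elichen3051/llama2-7b-sft-chat-no-template")

# Tiny in-memory stand-in; DPOTrainer expects "prompt"/"chosen"/"rejected" columns.
train_dataset = Dataset.from_dict({
    "prompt": ["What is direct preference optimization?"],
    "chosen": ["DPO fine-tunes a model directly on preference pairs."],
    "rejected": ["No idea."],
})

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl keeps a frozen copy of `model` as the reference
    args=TrainingArguments(
        output_dir="dpo-out",            # illustrative
        learning_rate=5e-7,              # from the card
        num_train_epochs=2,              # from the card
        per_device_train_batch_size=8,   # from the card
        gradient_accumulation_steps=8,   # from the card
    ),
    beta=0.1,                # not reported in the card
    max_length=1024,         # illustrative
    max_prompt_length=512,   # illustrative
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```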
maldv/badger-lambda-0-llama-3-8b
maldv
2024-06-11T23:40:53Z
10
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama3", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T21:58:29Z
---
license: cc-by-nc-4.0
library_name: transformers
tags:
- llama3
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/CHGsewUsPUZcg2doijuD9.png)

# Badger Λ Llama 3 8B Instruct - Zero NR

This is the companion to badger-lambda-llama-3-8b, produced with zero noise reduction.
ingeniumacademy/reuters-gpt2-text-gen
ingeniumacademy
2024-06-11T23:30:31Z
6
0
peft
[ "peft", "pytorch", "tensorboard", "safetensors", "gpt2", "generated_from_trainer", "base_model:tiiuae/falcon-7b", "base_model:adapter:tiiuae/falcon-7b", "license:apache-2.0", "region:us" ]
null
2023-09-13T21:28:54Z
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: tiiuae/falcon-7b
model-index:
- name: reuters-gpt2-text-gen
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# reuters-gpt2-text-gen

This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0295

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9745        | 0.96  | 15   | 2.0295          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
mradermacher/Elysium2.2-task-11b-GGUF
mradermacher
2024-06-11T23:27:49Z
9
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "powermove72/Trinity_Notus-xb", "powermove72/GreenScorpius-xb-Passthrough", "en", "base_model:powermove72/Elysium2.2-task-11b", "base_model:quantized:powermove72/Elysium2.2-task-11b", "endpoints_compatible", "region:us" ]
null
2024-06-11T22:44:06Z
---
base_model: powermove72/Elysium2.2-task-11b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Trinity_Notus-xb
- powermove72/GreenScorpius-xb-Passthrough
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/Elysium2.2-task-11b

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q2_K.gguf) | Q2_K | 4.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.IQ3_XS.gguf) | IQ3_XS | 4.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q3_K_S.gguf) | Q3_K_S | 5.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.IQ3_M.gguf) | IQ3_M | 5.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q3_K_L.gguf) | Q3_K_L | 6.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.IQ4_XS.gguf) | IQ4_XS | 6.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q5_K_S.gguf) | Q5_K_S | 7.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q5_K_M.gguf) | Q5_K_M | 8.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
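The usage section defers to external READMEs; as one concrete illustration, a single-file GGUF quant can be run locally with `llama-cpp-python` (a sketch; prompt and context size are chosen arbitrarily):

```python
from llama_cpp import Llama

# Any file from the table above works; Q4_K_M is the "fast, recommended" middle ground.
llm = Llama(model_path="Elysium2.2-task-11b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```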
stanleyos/rania-xlr-ser-emo4
stanleyos
2024-06-11T23:23:05Z
133
0
transformers
[ "transformers", "safetensors", "wav2vec2", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-06-11T23:21:59Z
---
tags:
- generated_from_trainer
model-index:
- name: rania-xlr-ser-emo4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# rania-xlr-ser-emo4

This model was trained from scratch on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
Augusto777/vit-base-patch16-224-ve-b-U10-40
Augusto777
2024-06-11T23:20:08Z
195
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-11T23:06:34Z
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-ve-b-U10-40
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8431372549019608
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-ve-b-U10-40

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5211
- Accuracy: 0.8431

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.96  | 6    | 1.3845          | 0.2549   |
| 1.3817        | 1.92  | 12   | 1.3529          | 0.4706   |
| 1.3817        | 2.88  | 18   | 1.2772          | 0.5882   |
| 1.2986        | 4.0   | 25   | 1.2121          | 0.3922   |
| 1.1298        | 4.96  | 31   | 1.1164          | 0.5882   |
| 1.1298        | 5.92  | 37   | 1.0879          | 0.5882   |
| 0.9842        | 6.88  | 43   | 0.9898          | 0.6863   |
| 0.8402        | 8.0   | 50   | 0.9233          | 0.7843   |
| 0.8402        | 8.96  | 56   | 0.9650          | 0.6471   |
| 0.7084        | 9.92  | 62   | 0.8243          | 0.7451   |
| 0.7084        | 10.88 | 68   | 0.7988          | 0.7647   |
| 0.5914        | 12.0  | 75   | 0.8114          | 0.7451   |
| 0.461         | 12.96 | 81   | 0.7652          | 0.7451   |
| 0.461         | 13.92 | 87   | 0.7406          | 0.7451   |
| 0.3769        | 14.88 | 93   | 0.6916          | 0.7451   |
| 0.3376        | 16.0  | 100  | 0.6182          | 0.7843   |
| 0.3376        | 16.96 | 106  | 0.8395          | 0.6863   |
| 0.2606        | 17.92 | 112  | 0.6941          | 0.7255   |
| 0.2606        | 18.88 | 118  | 0.7345          | 0.7255   |
| 0.2314        | 20.0  | 125  | 0.7374          | 0.7059   |
| 0.1907        | 20.96 | 131  | 0.7490          | 0.7647   |
| 0.1907        | 21.92 | 137  | 0.7292          | 0.7255   |
| 0.1804        | 22.88 | 143  | 0.7301          | 0.7451   |
| 0.1447        | 24.0  | 150  | 0.7224          | 0.7647   |
| 0.1447        | 24.96 | 156  | 0.7415          | 0.7255   |
| 0.1537        | 25.92 | 162  | 0.6668          | 0.7843   |
| 0.1537        | 26.88 | 168  | 0.7188          | 0.7451   |
| 0.1471        | 28.0  | 175  | 0.7291          | 0.7451   |
| 0.1241        | 28.96 | 181  | 0.5919          | 0.8039   |
| 0.1241        | 29.92 | 187  | 0.5211          | 0.8431   |
| 0.1058        | 30.88 | 193  | 0.6107          | 0.7843   |
| 0.1032        | 32.0  | 200  | 0.6863          | 0.7647   |
| 0.1032        | 32.96 | 206  | 0.6295          | 0.7647   |
| 0.1116        | 33.92 | 212  | 0.6061          | 0.7843   |
| 0.1116        | 34.88 | 218  | 0.6610          | 0.7843   |
| 0.0871        | 36.0  | 225  | 0.6109          | 0.8039   |
| 0.1037        | 36.96 | 231  | 0.6116          | 0.7843   |
| 0.1037        | 37.92 | 237  | 0.6176          | 0.8039   |
| 0.0802        | 38.4  | 240  | 0.6169          | 0.8039   |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
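The usage sections above are empty; a minimal inference sketch using the `transformers` pipeline (the image path is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Augusto777/vit-base-patch16-224-ve-b-U10-40",
)
print(classifier("example.jpg"))  # any RGB image covered by the model's classes
```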
yzhuang/gemma-1.1-7b-it_fictional_Korean_v1
yzhuang
2024-06-11T23:15:30Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:google/gemma-1.1-7b-it", "base_model:finetune:google/gemma-1.1-7b-it", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T09:07:41Z
---
license: gemma
base_model: google/gemma-1.1-7b-it
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: gemma-1.1-7b-it_fictional_Korean_v1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gemma-1.1-7b-it_fictional_Korean_v1

This model is a fine-tuned version of [google/gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
hdve/google-gemma-2b-1718147435
hdve
2024-06-11T23:12:57Z
190
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T23:10:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
valdmocha/videomae-surf-analytics-runpod
valdmocha
2024-06-11T23:07:52Z
8
0
transformers
[ "transformers", "safetensors", "timesformer", "video-classification", "generated_from_trainer", "base_model:facebook/timesformer-base-finetuned-k400", "base_model:finetune:facebook/timesformer-base-finetuned-k400", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-06-11T14:25:28Z
---
license: cc-by-nc-4.0
base_model: facebook/timesformer-base-finetuned-k400
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: videomae-surf-analytics-runpod
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# videomae-surf-analytics-runpod

This model is a fine-tuned version of [facebook/timesformer-base-finetuned-k400](https://huggingface.co/facebook/timesformer-base-finetuned-k400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4027
- Accuracy: 0.8838
- F1: 0.8838

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 610

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.6712        | 0.1016 | 62   | 0.8671          | 0.6680   | 0.6623 |
| 0.3119        | 1.1016 | 124  | 0.5911          | 0.7884   | 0.7887 |
| 0.2505        | 2.1016 | 186  | 0.5297          | 0.8008   | 0.8002 |
| 0.207         | 3.1016 | 248  | 0.5970          | 0.7801   | 0.7787 |
| 0.1743        | 4.1016 | 310  | 0.5612          | 0.8050   | 0.7984 |
| 0.1005        | 5.1016 | 372  | 0.4027          | 0.8838   | 0.8838 |
| 0.0147        | 6.1016 | 434  | 0.4360          | 0.8589   | 0.8573 |
| 0.0573        | 7.1016 | 496  | 0.4451          | 0.8714   | 0.8697 |
| 0.0143        | 8.1016 | 558  | 0.4099          | 0.8672   | 0.8666 |
| 0.1311        | 9.0852 | 610  | 0.4056          | 0.8755   | 0.8752 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
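The usage sections above are empty; a minimal inference sketch using the `transformers` video-classification pipeline (the clip file name is illustrative):

```python
from transformers import pipeline

# Requires a video decoding backend (e.g. decord) to be installed.
classifier = pipeline(
    "video-classification",
    model="valdmocha/videomae-surf-analytics-runpod",
)
print(classifier("surf_clip.mp4"))  # file name is illustrative
```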
Augusto777/vit-base-patch16-224-ve-b-U10-24
Augusto777
2024-06-11T23:02:56Z
196
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-11T22:54:51Z
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-ve-b-U10-24
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8431372549019608
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-ve-b-U10-24

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6432
- Accuracy: 0.8431

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.96  | 6    | 1.3827          | 0.3137   |
| 1.378         | 1.92  | 12   | 1.3335          | 0.5490   |
| 1.378         | 2.88  | 18   | 1.2577          | 0.5882   |
| 1.2725        | 4.0   | 25   | 1.1886          | 0.4706   |
| 1.1073        | 4.96  | 31   | 1.1040          | 0.6275   |
| 1.1073        | 5.92  | 37   | 1.0658          | 0.6078   |
| 0.9657        | 6.88  | 43   | 1.0155          | 0.6667   |
| 0.8361        | 8.0   | 50   | 0.9330          | 0.7451   |
| 0.8361        | 8.96  | 56   | 0.9690          | 0.6667   |
| 0.7181        | 9.92  | 62   | 0.8910          | 0.7255   |
| 0.7181        | 10.88 | 68   | 0.8953          | 0.6863   |
| 0.6126        | 12.0  | 75   | 0.8343          | 0.7451   |
| 0.5096        | 12.96 | 81   | 0.8048          | 0.7059   |
| 0.5096        | 13.92 | 87   | 0.7977          | 0.7059   |
| 0.4348        | 14.88 | 93   | 0.7250          | 0.7451   |
| 0.4011        | 16.0  | 100  | 0.6432          | 0.8431   |
| 0.4011        | 16.96 | 106  | 0.7317          | 0.7255   |
| 0.3292        | 17.92 | 112  | 0.7015          | 0.7451   |
| 0.3292        | 18.88 | 118  | 0.6248          | 0.7647   |
| 0.309         | 20.0  | 125  | 0.6990          | 0.7451   |
| 0.2744        | 20.96 | 131  | 0.6591          | 0.7843   |
| 0.2744        | 21.92 | 137  | 0.6452          | 0.7647   |
| 0.2864        | 22.88 | 143  | 0.6290          | 0.7843   |
| 0.2864        | 23.04 | 144  | 0.6285          | 0.7843   |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
lightly-ai/simclrv2-imagenet1k-r101_1x_sk0
lightly-ai
2024-06-11T23:01:30Z
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
2024-06-11T22:58:11Z
---
license: apache-2.0
datasets:
- ILSVRC/imagenet-1k
metrics:
- accuracy
tags:
- self-supervised learning
---

Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029).

Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch)

```misc
@article{chen2020big,
  title={Big Self-Supervised Models are Strong Semi-Supervised Learners},
  author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey},
  journal={arXiv preprint arXiv:2006.10029},
  year={2020}
}
```
DBangshu/GPT2_6_1
DBangshu
2024-06-11T22:59:37Z
135
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T22:59:15Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
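The card above leaves its "How to Get Started with the Model" section as [More Information Needed]. The snippet below is a hedged sketch of what that section would typically contain for a GPT-2-style text-generation checkpoint (per this record's tags); the repository ID is a placeholder, since the record's model ID is not shown above.

```python
# A minimal, hypothetical getting-started sketch for a GPT-2-style
# text-generation checkpoint. "your-org/your-gpt2-model" is a placeholder,
# not a real repository from this record.
from transformers import pipeline

generator = pipeline("text-generation", model="your-org/your-gpt2-model")
result = generator("Once upon a time,", max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```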
arcee-ai/MyAlee-Qwen-Instruct-v2-16k-v1
arcee-ai
2024-06-11T22:59:06Z
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2-7B", "base_model:finetune:Qwen/Qwen2-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T22:54:04Z
---
license: apache-2.0
base_model: Qwen/Qwen2-7B
tags:
- generated_from_trainer
model-index:
- name: outputs/out
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2-7B
trust_remote_code: true

chat_template: chatml
load_in_8bit: false
# load_in_4bit: true
strict: false

datasets:
  - path: arcee-ai/MyAlee-Education-Instructions-V2
    type: sharegpt
    field_messages: messages
  - path: Crystalcareai/Orca-Reka
    type: alpaca
dataset_prepared_path:
val_set_size: 0
output_dir: ./outputs/out

sequence_len: 16384
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

# adapter: qlora
# lora_model_dir:
# lora_r: 32
# lora_alpha: 64
# lora_dropout: 0.05
# lora_target_linear: true
# lora_fan_in_fan_out:

# wandb_project: qwen2-education
# wandb_entity:
# wandb_watch:
# wandb_name:
# wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 5
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 1e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 0
saves_per_epoch: 1
max_total_saves: 2
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
# fsdp:
#   - full_shard
#   - auto_wrap
# fsdp_config:
#   fsdp_limit_all_gathers: true
#   fsdp_sync_module_states: true
#   fsdp_offload_params: true
#   fsdp_use_orig_params: false
#   fsdp_cpu_ram_efficient_loading: true
#   fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
#   fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
#   fsdp_state_dict_type: FULL_STATE_DICT
special_tokens:
  pad_token: "<|endoftext|>"
  eos_token: "<|im_end|>"
```

</details><br>

# outputs/out

This model is a fine-tuned version of [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) on the arcee-ai/MyAlee-Education-Instructions-V2 and Crystalcareai/Orca-Reka datasets listed in the config above.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
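The card above does not include a usage example. The sketch below is a hedged suggestion, not part of the original card: it assumes the checkpoint loads as a standard Qwen2 causal LM and that the ChatML template configured during training (`chat_template: chatml`, EOS `<|im_end|>`) is stored with the tokenizer; the prompt is illustrative.

```python
# Hedged inference sketch for arcee-ai/MyAlee-Qwen-Instruct-v2-16k-v1.
# Assumes the tokenizer carries the ChatML chat template used in training.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/MyAlee-Qwen-Instruct-v2-16k-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain photosynthesis to a ten-year-old."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```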
juan-glez29/marIA-ideologiamul-4096
juan-glez29
2024-06-11T22:58:04Z
91
0
transformers
[ "transformers", "safetensors", "longformer", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-11T22:57:36Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
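The card above leaves its getting-started section empty. The sketch below is a hedged suggestion, not part of the original card: the repo tags mark this as a Longformer text classifier, and the "4096" in the model name suggests a 4,096-token window; the label set is whatever the checkpoint's config defines, and the Spanish input sentence is illustrative (the "marIA" name suggests a Spanish-language base).

```python
# Hedged sketch: text classification with juan-glez29/marIA-ideologiamul-4096.
# The labels printed come from the checkpoint's own config; they are not
# documented in the card above.
from transformers import pipeline

classifier = pipeline("text-classification", model="juan-glez29/marIA-ideologiamul-4096")
print(classifier("Texto de ejemplo para clasificar."))  # illustrative Spanish input
```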
Ramikan-BR/tinyllama-coder-py-LORA-v23
Ramikan-BR
2024-06-11T22:56:48Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "base_model:finetune:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-11T22:56:01Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-chat-bnb-4bit
---

# Uploaded model

- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
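The card above documents only how the model was trained. The sketch below is a hedged loading example, not from the original card: it assumes the repo holds LoRA weights (per the "-LORA-" in its name) on top of unsloth/tinyllama-chat-bnb-4bit, loadable with Unsloth's `FastLanguageModel`, and that a CUDA device is available; `max_seq_length` and the prompt are illustrative choices.

```python
# Hedged sketch: load the LoRA repo with Unsloth and generate Python code.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Ramikan-BR/tinyllama-coder-py-LORA-v23",
    max_seq_length=2048,       # illustrative; not a documented value
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode

inputs = tokenizer(["# Write a Python function that reverses a string\n"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```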
Augusto777/vit-base-patch16-224-ve-b-U10-12
Augusto777
2024-06-11T22:53:38Z
196
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-11T22:48:46Z
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-ve-b-U10-12
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7450980392156863
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-ve-b-U10-12

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9868
- Accuracy: 0.7451

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 12

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.96  | 6    | 1.3771          | 0.3137   |
| 1.3705        | 1.92  | 12   | 1.3219          | 0.5490   |
| 1.3705        | 2.88  | 18   | 1.2517          | 0.5490   |
| 1.2535        | 4.0   | 25   | 1.1875          | 0.5882   |
| 1.1079        | 4.96  | 31   | 1.1237          | 0.6078   |
| 1.1079        | 5.92  | 37   | 1.1003          | 0.6275   |
| 1.0048        | 6.88  | 43   | 1.0609          | 0.6863   |
| 0.9172        | 8.0   | 50   | 1.0668          | 0.6078   |
| 0.9172        | 8.96  | 56   | 1.0031          | 0.6667   |
| 0.8558        | 9.92  | 62   | 0.9868          | 0.7451   |
| 0.8558        | 10.88 | 68   | 0.9763          | 0.7451   |
| 0.8284        | 11.52 | 72   | 0.9733          | 0.7451   |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
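The card above reports metrics but no inference code. The sketch below is a hedged addition: it assumes the fine-tuned checkpoint keeps the standard ViT image-classification head and that its `id2label` mapping reflects the imagefolder class names; `example.jpg` is a placeholder path.

```python
# Hedged sketch: classify a local image with the fine-tuned ViT checkpoint.
from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="Augusto777/vit-base-patch16-224-ve-b-U10-12")
image = Image.open("example.jpg")  # placeholder path to any test image
print(classifier(image, top_k=3))  # top three labels with scores
```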
Magpie-Align/Llama-3-8B-WildChat
Magpie-Align
2024-06-11T22:50:28Z
58
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "axolotl", "generated_from_trainer", "conversational", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:finetune:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-03T00:30:07Z
---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: Llama-3-8B-WildChat
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: flydust/WildChat_ShareGPT
    type: sharegpt
    conversation: llama3
dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: ./out_Llama-3-WildChat

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

wandb_project: SynDa
wandb_entity:
wandb_watch:
wandb_name: Llama-3-WildChat
wandb_log_model:
hub_model_id: SynDa/Llama-3-8B-WildChat

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 3
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|end_of_text|>
```

</details><br>

# Llama-3-8B-WildChat

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the flydust/WildChat_ShareGPT dataset listed in the config above.
It achieves the following results on the evaluation set:
- Loss: 0.8197

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1455        | 0.0003 | 1    | 1.3389          |
| 0.9084        | 0.3333 | 1128 | 0.8551          |
| 0.9265        | 0.6667 | 2256 | 0.8363          |
| 0.9086        | 1.0    | 3384 | 0.8210          |
| 0.8257        | 1.3164 | 4512 | 0.8214          |
| 0.8306        | 1.6497 | 5640 | 0.8197          |
| 0.8252        | 1.9831 | 6768 | 0.8197          |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
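The card above has no usage example. The sketch below is a hedged addition, not from the original card: the axolotl config trains with `conversation: llama3`, so it assumes the Llama 3 chat template ships with the tokenizer; the prompt is illustrative.

```python
# Hedged sketch: chat inference with Magpie-Align/Llama-3-8B-WildChat,
# assuming the tokenizer provides the Llama 3 chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Magpie-Align/Llama-3-8B-WildChat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Suggest three weekend project ideas."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```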
jointriple/brand_classification_1_20240611_model
jointriple
2024-06-11T22:49:52Z
185
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:eu" ]
text-classification
2024-06-11T22:49:31Z
hdve/Qwen-Qwen1.5-7B-1718145940
hdve
2024-06-11T22:49:07Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-11T22:46:24Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
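The card above leaves its getting-started section empty. The sketch below is a hedged addition: the repo name and tags point to a Qwen1.5-7B-based chat model, so the standard Qwen chat template is assumed; nothing in the card confirms more than that.

```python
# Hedged sketch: streamed chat inference with hdve/Qwen-Qwen1.5-7B-1718145940,
# assuming the tokenizer ships with a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "hdve/Qwen-Qwen1.5-7B-1718145940"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize what you can do in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# Stream tokens to stdout as they are generated, skipping the prompt echo.
model.generate(inputs, max_new_tokens=128, streamer=TextStreamer(tokenizer, skip_prompt=True))
```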
SimoLM/testbot
SimoLM
2024-06-11T22:45:40Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-06-11T22:25:40Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
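The card above leaves its getting-started section empty. The sketch below is a hedged addition: the repo tags mark this as a Llama-based sequence classifier, but the number and meaning of the labels are undocumented, so they are read from the model config here.

```python
# Hedged sketch: score a sentence with the SimoLM/testbot classifier and
# print the top label name as defined in the checkpoint's own config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "SimoLM/testbot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example sentence to score.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```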
shalinik/law360-falconsai
shalinik
2024-06-11T22:44:14Z
107
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-11T18:59:29Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
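The card above leaves its getting-started section empty. The sketch below is a hedged addition: the tags mark this as a T5 text2text model, and the "law360-falconsai" name suggests legal-news summarization, so it is framed here as a summarization call; that framing, and the sample input, are assumptions.

```python
# Hedged sketch: summarization with shalinik/law360-falconsai, assuming the
# checkpoint was fine-tuned for a summarization-style text2text task.
from transformers import pipeline

summarizer = pipeline("summarization", model="shalinik/law360-falconsai")
document = "Long legal article text goes here..."  # illustrative input
print(summarizer(document, max_length=150, min_length=30, do_sample=False)[0]["summary_text"])
```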
AwesomeEmerald/BusyMenChat
AwesomeEmerald
2024-06-11T22:42:27Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-11T22:42:16Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---

# Uploaded model

- **Developed by:** AwesomeEmerald
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
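The card above documents only the training setup. The sketch below is a hedged loading example, not from the original card: it assumes the repo contains full weights loadable as a standard Llama causal LM (if it only holds LoRA adapters, PEFT loading would be needed instead), that `bitsandbytes` is installed, and that the Llama 3 Instruct chat template is available from the tokenizer.

```python
# Hedged sketch: 4-bit loading and chat inference for AwesomeEmerald/BusyMenChat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "AwesomeEmerald/BusyMenChat"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

messages = [{"role": "user", "content": "Draft a two-line standup update."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```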
LarryAIDraw/kashima_pony
LarryAIDraw
2024-06-11T22:41:02Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-06-11T22:36:30Z
---
license: creativeml-openrail-m
---

https://civitai.com/models/508234/pony-xl-kashima-kantai-collection
LarryAIDraw/clorinde_kozue
LarryAIDraw
2024-06-11T22:40:45Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-06-11T22:35:33Z
---
license: creativeml-openrail-m
---

https://civitai.com/models/499609/clorinde-genshin-impact
blockblockblock/Qwen2-72B-Instruct-bpw4-exl2
blockblockblock
2024-06-11T22:40:25Z
8
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "arxiv:2309.00071", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-06-11T22:36:07Z
---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen2-72B-Instruct

## Introduction

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 72B Qwen2 model.

Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

Qwen2-72B-Instruct supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>

## Model Details

Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code.

## Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements

The code for Qwen2 is included in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```

## Quickstart

Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-72B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-72B-Instruct")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

### Processing Long Texts

To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps:

1. **Install vLLM**: You can install vLLM by running the following command.

    ```bash
    pip install "vllm>=0.4.3"
    ```

    Or you can install vLLM from [source](https://github.com/vllm-project/vllm/).

2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by including the snippet below:

    ```json
    {
        "architectures": [
            "Qwen2ForCausalLM"
        ],
        // ...
        "vocab_size": 152064,

        // add the following snippet
        "rope_scaling": {
            "factor": 4.0,
            "original_max_position_embeddings": 32768,
            "type": "yarn"
        }
    }
    ```

    This snippet enables YARN to support longer contexts.

3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-compatible server using the command:

    ```bash
    python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-72B-Instruct --model path/to/weights
    ```

    Then you can access the Chat API with:

    ```bash
    curl http://localhost:8000/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
        "model": "Qwen2-72B-Instruct",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Your Long Input Here."}
        ]
        }'
    ```

    For further usage instructions of vLLM, please refer to our [GitHub](https://github.com/QwenLM/Qwen2).

**Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required.

## Evaluation

We briefly compare Qwen2-72B-Instruct with similar-sized instruction-tuned LLMs, including our previous Qwen1.5-72B-Chat. The results are shown as follows:

| Datasets | Llama-3-70B-Instruct | Qwen1.5-72B-Chat | **Qwen2-72B-Instruct** |
| :--- | :---: | :---: | :---: |
| _**English**_ |  |  |  |
| MMLU | 82.0 | 75.6 | **82.3** |
| MMLU-Pro | 56.2 | 51.7 | **64.4** |
| GPQA | 41.9 | 39.4 | **42.4** |
| TheoremQA | 42.5 | 28.8 | **44.4** |
| MT-Bench | 8.95 | 8.61 | **9.12** |
| Arena-Hard | 41.1 | 36.1 | **48.1** |
| IFEval (Prompt Strict-Acc.) | 77.3 | 55.8 | **77.6** |
| _**Coding**_ |  |  |  |
| HumanEval | 81.7 | 71.3 | **86.0** |
| MBPP | **82.3** | 71.9 | 80.2 |
| MultiPL-E | 63.4 | 48.1 | **69.2** |
| EvalPlus | 75.2 | 66.9 | **79.0** |
| LiveCodeBench | 29.3 | 17.9 | **35.7** |
| _**Mathematics**_ |  |  |  |
| GSM8K | **93.0** | 82.7 | 91.1 |
| MATH | 50.4 | 42.5 | **59.7** |
| _**Chinese**_ |  |  |  |
| C-Eval | 61.6 | 76.1 | **83.8** |
| AlignBench | 7.42 | 7.28 | **8.27** |

## Citation

If you find our work helpful, feel free to cite us.

```
@article{qwen2,
  title={Qwen2 Technical Report},
  year={2024}
}
```
LarryAIDraw/irohaIsshiki_XL-Pony_LoRA-C3Lier_8-8-8-8_AdamW_Un3e-4_Te1_5e-4_10batch
LarryAIDraw
2024-06-11T22:40:11Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-06-11T22:33:33Z
---
license: creativeml-openrail-m
---

https://civitai.com/models/506252/request-iroha-isshiki-oregairu-my-teen-romantic-comedy-snafu-sdxl-pony-diffusion