Dataset schema (column: dtype, observed range):
- modelId: string (length 5–139)
- author: string (length 2–42)
- last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-06-22 06:27:16)
- downloads: int64 (0 to 223M)
- likes: int64 (0 to 11.7k)
- library_name: string (492 classes)
- tags: sequence (length 1–4.05k)
- pipeline_tag: string (54 classes)
- createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-06-22 06:26:41)
- card: string (length 11–1.01M)
lenatr99/loha_fine_tuned_rte_croslo
lenatr99
2024-05-23T20:21:26Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:EMBEDDIA/crosloengual-bert", "base_model:adapter:EMBEDDIA/crosloengual-bert", "license:cc-by-4.0", "region:us" ]
null
2024-05-23T20:21:21Z
--- license: cc-by-4.0 library_name: peft tags: - generated_from_trainer base_model: EMBEDDIA/crosloengual-bert metrics: - accuracy - f1 model-index: - name: loha_fine_tuned_rte_croslo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # loha_fine_tuned_rte_croslo This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6884 - Accuracy: 0.5862 - F1: 0.5862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 0.7171 | 1.7241 | 50 | 0.6838 | 0.5862 | 0.5483 | | 0.6991 | 3.4483 | 100 | 0.6843 | 0.6552 | 0.6388 | | 0.7194 | 5.1724 | 150 | 0.6864 | 0.5862 | 0.5789 | | 0.7001 | 6.8966 | 200 | 0.6882 | 0.5862 | 0.5862 | | 0.7079 | 8.6207 | 250 | 0.6884 | 0.5862 | 0.5862 | | 0.7095 | 10.3448 | 300 | 0.6889 | 0.5862 | 0.5862 | | 0.705 | 12.0690 | 350 | 0.6884 | 0.5862 | 0.5862 | | 0.6978 | 13.7931 | 400 | 0.6884 | 0.5862 | 0.5862 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
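The card above documents training but not inference. A minimal sketch of loading this LoHa adapter with PEFT (assumed usage, not from the card; the two-label RTE head and the example premise/hypothesis pair are assumptions):

```python
# Hypothetical usage sketch: attach the LoHa adapter to its base model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "EMBEDDIA/crosloengual-bert", num_labels=2  # RTE is a two-class entailment task
)
model = PeftModel.from_pretrained(base, "lenatr99/loha_fine_tuned_rte_croslo")
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/crosloengual-bert")

inputs = tokenizer("A premise sentence.", "A hypothesis sentence.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities; label order follows the adapter's config
```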
lenatr99/fine_tuned_rte_croslo
lenatr99
2024-05-23T20:19:14Z
111
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:EMBEDDIA/crosloengual-bert", "base_model:finetune:EMBEDDIA/crosloengual-bert", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T20:18:37Z
--- license: cc-by-4.0 base_model: EMBEDDIA/crosloengual-bert tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: fine_tuned_rte_croslo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine_tuned_rte_croslo This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7790 - Accuracy: 0.6207 - F1: 0.5951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 0.6951 | 1.7241 | 50 | 0.6869 | 0.5517 | 0.5549 | | 0.5952 | 3.4483 | 100 | 0.6381 | 0.6207 | 0.5466 | | 0.4725 | 5.1724 | 150 | 0.6293 | 0.6207 | 0.6090 | | 0.3055 | 6.8966 | 200 | 0.6905 | 0.6552 | 0.6018 | | 0.2004 | 8.6207 | 250 | 0.6624 | 0.6897 | 0.6523 | | 0.1191 | 10.3448 | 300 | 0.7124 | 0.6552 | 0.6236 | | 0.0661 | 12.0690 | 350 | 0.7694 | 0.6552 | 0.6236 | | 0.048 | 13.7931 | 400 | 0.7790 | 0.6207 | 0.5951 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
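Since this sibling model is a full (non-adapter) transformers checkpoint, the standard pipeline applies; a minimal sketch (assumed, not from the card):

```python
# Hypothetical usage sketch for the fully fine-tuned RTE classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="lenatr99/fine_tuned_rte_croslo")
# Sentence-pair input goes in as text / text_pair so the tokenizer inserts [SEP].
print(classifier({"text": "A premise sentence.", "text_pair": "A hypothesis sentence."}))
```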
ManjuRangam/food_classifier
ManjuRangam
2024-05-23T20:19:08Z
64
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-23T19:47:08Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_keras_callback model-index: - name: ManjuRangam/food_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ManjuRangam/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3737 - Validation Loss: 0.3714 - Train Accuracy: 0.912 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.7536 | 1.5773 | 0.86 | 0 | | 1.1748 | 0.8043 | 0.894 | 1 | | 0.6680 | 0.5410 | 0.895 | 2 | | 0.4813 | 0.4248 | 0.898 | 3 | | 0.3737 | 0.3714 | 0.912 | 4 | ### Framework versions - Transformers 4.41.0 - TensorFlow 2.15.0 - Datasets 2.19.1 - Tokenizers 0.19.1
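This checkpoint ships TensorFlow weights (note the `tf` tag), so inference goes through the TF model classes; a minimal sketch (assumed; the image path is a placeholder):

```python
# Hypothetical usage sketch for the TF ViT food classifier.
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo = "ManjuRangam/food_classifier"
processor = AutoImageProcessor.from_pretrained(repo)
model = TFAutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example_dish.jpg")  # placeholder local image
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```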
bartowski/aya-23-8B-exl2
bartowski
2024-05-23T20:09:25Z
1
1
transformers
[ "transformers", "text-generation", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T20:09:24Z
--- library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi license: cc-by-nc-4.0 quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of aya-23-8B Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.21">turboderp's ExLlamaV2 v0.0.21</a> for quantization. <b>The "main" branch contains only the measurement.json; download one of the other branches for the model (see below)</b> Each branch contains a different bits-per-weight quantization, with the main branch holding only the measurement.json for further conversions. Original model: https://huggingface.co/CohereForAI/aya-23-8B ## Prompt format ``` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ``` ## Available sizes | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/aya-23-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/aya-23-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/aya-23-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/aya-23-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/aya-23-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/aya-23-8B-exl2 aya-23-8B-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/aya-23-8B-exl2 --revision 6_5 --local-dir aya-23-8B-exl2-6_5 ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/aya-23-8B-exl2 --revision 6_5 --local-dir aya-23-8B-exl2-6.5 ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
mrovejaxd/ABL_trad_g
mrovejaxd
2024-05-23T20:07:44Z
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:dccuchile/bert-base-spanish-wwm-cased", "base_model:finetune:dccuchile/bert-base-spanish-wwm-cased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-22T22:19:21Z
--- base_model: dccuchile/bert-base-spanish-wwm-cased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: ABL_trad_g results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ABL_trad_g This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8036 - Accuracy: 0.685 - F1: 0.6817 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 1.0352 | 1.0 | 1000 | 0.9705 | 0.5575 | 0.5434 | | 0.8786 | 2.0 | 2000 | 0.8527 | 0.6 | 0.5990 | | 0.7943 | 3.0 | 3000 | 0.8156 | 0.6125 | 0.6097 | | 0.7544 | 4.0 | 4000 | 0.7883 | 0.6508 | 0.6490 | | 0.713 | 5.0 | 5000 | 0.7780 | 0.6492 | 0.6455 | | 0.6854 | 6.0 | 6000 | 0.7630 | 0.675 | 0.6728 | | 0.644 | 7.0 | 7000 | 0.7692 | 0.6733 | 0.6708 | | 0.6051 | 8.0 | 8000 | 0.7651 | 0.6808 | 0.6782 | | 0.5926 | 9.0 | 9000 | 0.7566 | 0.6792 | 0.6772 | | 0.547 | 10.0 | 10000 | 0.7746 | 0.6858 | 0.6830 | | 0.5197 | 11.0 | 11000 | 0.7924 | 0.6733 | 0.6697 | | 0.4762 | 12.0 | 12000 | 0.8036 | 0.685 | 0.6817 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
enigma-112/akai_ner
enigma-112
2024-05-23T20:06:26Z
116
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "agriculture", "en", "dataset:ksgr5566/ner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-23T18:03:54Z
--- license: apache-2.0 language: - en library_name: transformers pipeline_tag: token-classification tags: - agriculture datasets: - ksgr5566/ner metrics: - f1 widget: - text: "Which are the best varieties of tomatoes?" example_title: "Example- crop" - text: "My paddy has been attacked by grasshoppers." example_title: "Example- crop + pest" - text: "Where can I buy drought-resistant paddy?" example_title: "Example- seed type" --- # Model Card for Model ID The model recognizes three types of entities: pest names, seed types, and crop names. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** SamagraX | Transforming Governance - **Shared by [optional]:** SamagraX | Transforming Governance - **Model type:** Named Entity Recognition for agriculture - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model [optional]:** distilbert-base-uncased ## Uses Helps extract crop names, pest names, and seed details from a user query.
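A minimal sketch of querying the NER model (assumed, not from the card; the input reuses one of the widget examples):

```python
# Hypothetical usage sketch: group sub-word tokens into whole entity spans.
from transformers import pipeline

ner = pipeline("token-classification", model="enigma-112/akai_ner",
               aggregation_strategy="simple")
for entity in ner("My paddy has been attacked by grasshoppers."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```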
yihanwang617/tinyllama-sft-vicuna-random-100k
yihanwang617
2024-05-23T20:04:21Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:yihanwang617/vicuna_sub_random_100k", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "base_model:finetune:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T07:22:14Z
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - yihanwang617/vicuna_sub_random_100k model-index: - name: tinyllama-sft-vicuna-random-100k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-sft-vicuna-random-100k This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the yihanwang617/vicuna_sub_random_100k dataset. It achieves the following results on the evaluation set: - Loss: 0.7457 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7119 | 1.0 | 732 | 0.7457 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.14.6 - Tokenizers 0.15.0
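A minimal chat-generation sketch for this SFT checkpoint (assumed; it presumes the tokenizer ships a chat template, as is conventional for alignment-handbook SFT runs):

```python
# Hypothetical usage sketch for the TinyLlama SFT model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "yihanwang617/tinyllama-sft-vicuna-random-100k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain overfitting in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```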
lenatr99/loha_fine_tuned_copa_XLMroberta
lenatr99
2024-05-23T20:03:52Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:adapter:FacebookAI/xlm-roberta-base", "license:mit", "region:us" ]
null
2024-05-23T20:03:43Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: xlm-roberta-base metrics: - accuracy - f1 model-index: - name: loha_fine_tuned_copa_XLMroberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # loha_fine_tuned_copa_XLMroberta This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6928 - Accuracy: 0.56 - F1: 0.5589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6957 | 1.0 | 50 | 0.6928 | 0.56 | 0.5575 | | 0.6944 | 2.0 | 100 | 0.6928 | 0.56 | 0.5589 | | 0.6904 | 3.0 | 150 | 0.6928 | 0.56 | 0.5589 | | 0.6902 | 4.0 | 200 | 0.6928 | 0.56 | 0.5589 | | 0.6948 | 5.0 | 250 | 0.6928 | 0.56 | 0.5589 | | 0.6961 | 6.0 | 300 | 0.6928 | 0.56 | 0.5589 | | 0.6979 | 7.0 | 350 | 0.6928 | 0.56 | 0.5589 | | 0.6903 | 8.0 | 400 | 0.6928 | 0.56 | 0.5589 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
lenatr99/lora_fine_tuned_copa_XLMroberta
lenatr99
2024-05-23T20:01:56Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:adapter:FacebookAI/xlm-roberta-base", "license:mit", "region:us" ]
null
2024-05-23T20:01:53Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: xlm-roberta-base metrics: - accuracy - f1 model-index: - name: lora_fine_tuned_copa_XLMroberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_fine_tuned_copa_XLMroberta This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6929 - Accuracy: 0.58 - F1: 0.58 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6951 | 1.0 | 50 | 0.6928 | 0.56 | 0.5589 | | 0.6949 | 2.0 | 100 | 0.6928 | 0.58 | 0.5790 | | 0.6899 | 3.0 | 150 | 0.6928 | 0.59 | 0.5895 | | 0.6906 | 4.0 | 200 | 0.6929 | 0.58 | 0.58 | | 0.6953 | 5.0 | 250 | 0.6929 | 0.58 | 0.58 | | 0.6949 | 6.0 | 300 | 0.6929 | 0.58 | 0.58 | | 0.6985 | 7.0 | 350 | 0.6929 | 0.58 | 0.58 | | 0.6912 | 8.0 | 400 | 0.6929 | 0.58 | 0.58 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
isaaclee/duration_mistral_train_run3
isaaclee
2024-05-23T19:57:28Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-23T18:16:43Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.1 model-index: - name: duration_mistral_train_run3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # duration_mistral_train_run3 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
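Since this is a LoRA-style PEFT adapter over Mistral-7B-Instruct, PEFT's Auto class can pull in the base model automatically; a minimal sketch (assumed; the prompt is a placeholder in the Mistral instruct format):

```python
# Hypothetical usage sketch: load adapter plus base model in one call.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "isaaclee/duration_mistral_train_run3"
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

prompt = "[INST] Estimate the duration of the described task. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```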
hgnoi/1fLjPnSpnrOruhEL
hgnoi
2024-05-23T19:55:51Z
165
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T19:54:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GovindJo/Fine_Tune_T5_Model_News_Summarization
GovindJo
2024-05-23T19:50:16Z
63
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-23T11:26:32Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_keras_callback model-index: - name: GovindJo/Fine_Tune_T5_Model_News_Summarization results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # GovindJo/Fine_Tune_T5_Model_News_Summarization This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.8715 - Validation Loss: 1.6797 - Train Lr: 2e-05 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Lr | Epoch | |:----------:|:---------------:|:--------:|:-----:| | 1.9350 | 1.7003 | 2e-05 | 0 | | 1.8854 | 1.6873 | 2e-05 | 1 | | 1.8715 | 1.6797 | 2e-05 | 2 | ### Framework versions - Transformers 4.39.3 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
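A minimal summarization sketch for this TF-trained T5 checkpoint (assumed; the article text is a placeholder):

```python
# Hypothetical usage sketch for the fine-tuned T5 news summarizer.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="GovindJo/Fine_Tune_T5_Model_News_Summarization",
    framework="tf",  # the repo carries TensorFlow weights
)
article = "Long news article text goes here..."  # placeholder input
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```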
lenatr99/loha_fine_tuned_copa_sloberta
lenatr99
2024-05-23T19:49:21Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:EMBEDDIA/sloberta", "base_model:adapter:EMBEDDIA/sloberta", "license:cc-by-sa-4.0", "region:us" ]
null
2024-05-23T19:49:19Z
--- license: cc-by-sa-4.0 library_name: peft tags: - generated_from_trainer base_model: EMBEDDIA/sloberta metrics: - accuracy - f1 model-index: - name: loha_fine_tuned_copa_sloberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # loha_fine_tuned_copa_sloberta This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6929 - Accuracy: 0.51 - F1: 0.5104 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7001 | 1.0 | 50 | 0.6935 | 0.47 | 0.4711 | | 0.6932 | 2.0 | 100 | 0.6931 | 0.48 | 0.48 | | 0.6913 | 3.0 | 150 | 0.6928 | 0.51 | 0.5112 | | 0.7055 | 4.0 | 200 | 0.6927 | 0.48 | 0.48 | | 0.6902 | 5.0 | 250 | 0.6926 | 0.5 | 0.4988 | | 0.6981 | 6.0 | 300 | 0.6930 | 0.47 | 0.4694 | | 0.7011 | 7.0 | 350 | 0.6930 | 0.5 | 0.4988 | | 0.6894 | 8.0 | 400 | 0.6929 | 0.51 | 0.5104 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
tsavage68/MedQA_L3_300steps_1e6rate_01beta_CSFTDPO
tsavage68
2024-05-23T19:49:16Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T19:45:11Z
--- license: llama3 base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: MedQA_L3_300steps_1e6rate_01beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_300steps_1e6rate_01beta_CSFTDPO This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4661 - Rewards/chosen: 0.6273 - Rewards/rejected: -0.3771 - Rewards/accuracies: 0.7604 - Rewards/margins: 1.0045 - Logps/rejected: -37.6261 - Logps/chosen: -25.0552 - Logits/rejected: -0.8801 - Logits/chosen: -0.8780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6869 | 0.0489 | 50 | 0.6696 | -0.2211 | -0.2710 | 0.7253 | 0.0498 | -36.5645 | -33.5400 | -0.7298 | -0.7290 | | 0.4779 | 0.0977 | 100 | 0.5887 | 1.4526 | 1.0417 | 0.6945 | 0.4109 | -23.4374 | -16.8024 | -0.8047 | -0.8036 | | 0.5155 | 0.1466 | 150 | 0.4976 | 0.6394 | -0.2000 | 0.7363 | 0.8394 | -35.8551 | -24.9343 | -0.8636 | -0.8617 | | 0.4245 | 0.1954 | 200 | 0.4924 | 0.0477 | -0.9077 | 0.7648 | 0.9554 | -42.9321 | -30.8513 | -0.8783 | -0.8762 | | 0.4563 | 0.2443 | 250 | 0.4675 | 0.6549 | -0.3364 | 0.7560 | 0.9913 | -37.2189 | -24.7791 | -0.8807 | -0.8786 | | 0.3066 | 0.2931 | 300 | 0.4661 | 0.6273 | -0.3771 | 0.7604 | 1.0045 | -37.6261 | -25.0552 | -0.8801 | -0.8780 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.0.0+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
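For readers decoding the reward columns above: under the standard DPO bookkeeping (as in TRL), each reward is beta times the difference between the policy's and the reference model's log-probability of a response, and the margin is simply chosen minus rejected. A quick check against the card's own final-eval numbers, assuming beta = 0.1 from the "01beta" in the model name:

```python
# Sanity check of the reported DPO margin; values copied from the table above.
rewards_chosen = 0.6273
rewards_rejected = -0.3771
margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # 1.0044, matching the reported rewards/margins of ~1.0045 up to rounding
```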
moncefem/guillaumetell-7b-awq
moncefem
2024-05-23T19:48:50Z
77
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-05-23T19:45:41Z
--- license: apache-2.0 ---
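This card carries no usage notes beyond the license, so here is a minimal loading sketch (assumed): transformers can load AWQ-quantized checkpoints directly once the `autoawq` package is installed.

```python
# Hypothetical usage sketch for the AWQ-quantized checkpoint
# (requires: pip install autoawq).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moncefem/guillaumetell-7b-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Bonjour, comment puis-je vous aider ?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```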
lenatr99/lora_fine_tuned_copa_sloberta
lenatr99
2024-05-23T19:47:55Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:EMBEDDIA/sloberta", "base_model:adapter:EMBEDDIA/sloberta", "license:cc-by-sa-4.0", "region:us" ]
null
2024-05-23T19:47:50Z
--- license: cc-by-sa-4.0 library_name: peft tags: - generated_from_trainer base_model: EMBEDDIA/sloberta metrics: - accuracy - f1 model-index: - name: lora_fine_tuned_copa_sloberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_fine_tuned_copa_sloberta This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6931 - Accuracy: 0.55 - F1: 0.5509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7011 | 1.0 | 50 | 0.6931 | 0.52 | 0.5188 | | 0.6953 | 2.0 | 100 | 0.6931 | 0.59 | 0.5895 | | 0.6981 | 3.0 | 150 | 0.6931 | 0.57 | 0.5704 | | 0.6978 | 4.0 | 200 | 0.6931 | 0.55 | 0.5495 | | 0.692 | 5.0 | 250 | 0.6931 | 0.54 | 0.5411 | | 0.7014 | 6.0 | 300 | 0.6931 | 0.48 | 0.48 | | 0.6928 | 7.0 | 350 | 0.6931 | 0.5 | 0.5012 | | 0.6953 | 8.0 | 400 | 0.6931 | 0.55 | 0.5509 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
bert-base/fun_trained_convbert_epoch_7
bert-base
2024-05-23T19:45:40Z
196
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T19:43:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vincentpremise/mps-invoice-product-BAAI_bge_small_en_v1.5
vincentpremise
2024-05-23T19:38:45Z
7
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-23T19:04:11Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # vincentpremise/mps-invoice-product-BAAI_bge_small_en_v1.5 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('vincentpremise/mps-invoice-product-BAAI_bge_small_en_v1.5') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('vincentpremise/mps-invoice-product-BAAI_bge_small_en_v1.5') model = AutoModel.from_pretrained('vincentpremise/mps-invoice-product-BAAI_bge_small_en_v1.5') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=vincentpremise/mps-invoice-product-BAAI_bge_small_en_v1.5) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 94 with parameters: ``` {'batch_size': 32} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 10, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 94, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
lenatr99/loha_fine_tuned_copa_croslo
lenatr99
2024-05-23T19:38:44Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:EMBEDDIA/crosloengual-bert", "base_model:adapter:EMBEDDIA/crosloengual-bert", "license:cc-by-4.0", "region:us" ]
null
2024-05-03T18:18:03Z
--- license: cc-by-4.0 library_name: peft tags: - generated_from_trainer base_model: EMBEDDIA/crosloengual-bert metrics: - accuracy - f1 model-index: - name: loha_fine_tuned_copa_croslo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # loha_fine_tuned_copa_croslo This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3989 - Accuracy: 0.65 - F1: 0.6507 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7116 | 1.0 | 50 | 0.6663 | 0.57 | 0.5704 | | 0.657 | 2.0 | 100 | 0.6277 | 0.62 | 0.62 | | 0.49 | 3.0 | 150 | 0.6608 | 0.67 | 0.6708 | | 0.3144 | 4.0 | 200 | 0.7786 | 0.68 | 0.6805 | | 0.099 | 5.0 | 250 | 1.1639 | 0.63 | 0.6303 | | 0.0377 | 6.0 | 300 | 1.2560 | 0.65 | 0.6496 | | 0.0216 | 7.0 | 350 | 1.3663 | 0.65 | 0.6503 | | 0.0107 | 8.0 | 400 | 1.3989 | 0.65 | 0.6507 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
tsavage68/MedQA_L3_1000steps_1e5rate_03beta_CSFTDPO
tsavage68
2024-05-23T19:38:42Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T19:34:12Z
--- license: llama3 base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: MedQA_L3_1000steps_1e5rate_03beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_1000steps_1e5rate_03beta_CSFTDPO This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8199 - Rewards/chosen: -5.6953 - Rewards/rejected: -5.2697 - Rewards/accuracies: 0.4571 - Rewards/margins: -0.4255 - Logps/rejected: -51.4207 - Logps/chosen: -50.3128 - Logits/rejected: -1.1748 - Logits/chosen: -1.1747 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6976 | 0.0489 | 50 | 1.6003 | -6.0871 | -6.7321 | 0.5626 | 0.6450 | -56.2952 | -51.6189 | -0.8478 | -0.8474 | | 2.0492 | 0.0977 | 100 | 1.5171 | -2.8937 | -2.7957 | 0.4791 | -0.0979 | -43.1739 | -40.9741 | -0.7086 | -0.7085 | | 3.2675 | 0.1466 | 150 | 2.4839 | -9.5405 | -8.8952 | 0.4264 | -0.6452 | -63.5056 | -63.1301 | -0.6090 | -0.6092 | | 2.5387 | 0.1954 | 200 | 2.8407 | -10.8845 | -10.2333 | 0.4220 | -0.6513 | -67.9657 | -67.6103 | -2.0451 | -2.0454 | | 3.5954 | 0.2443 | 250 | 5.2964 | -26.2267 | -26.1016 | 0.4725 | -0.1251 | -120.8603 | -118.7509 | -2.7907 | -2.7903 | | 5.2171 | 0.2931 | 300 | 3.1156 | -11.9636 | -11.4341 | 0.4549 | -0.5294 | -71.9686 | -71.2070 | -1.4795 | -1.4797 | | 2.6671 | 0.3420 | 350 | 2.8765 | -8.6508 | -8.1258 | 0.4220 | -0.5250 | -60.9407 | -60.1644 | -0.9503 | -0.9502 | | 3.7894 | 0.3908 | 400 | 2.8694 | -9.8779 | -9.1060 | 0.4242 | -0.7720 | -64.2081 | -64.2550 | -1.0926 | -1.0927 | | 4.4115 | 0.4397 | 450 | 2.6152 | -9.1581 | -8.5492 | 0.4176 | -0.6089 | -62.3523 | -61.8555 | -1.3932 | -1.3933 | | 3.6882 | 0.4885 | 500 | 2.5995 | -10.0842 | -9.5563 | 0.4352 | -0.5279 | -65.7092 | -64.9425 | -1.3920 | -1.3918 | | 4.7478 | 0.5374 | 550 | 3.1439 | -13.8538 | -13.2693 | 0.4264 | -0.5845 | -78.0858 | -77.5078 | -1.4673 | -1.4673 | | 3.6453 | 0.5862 | 600 | 2.5501 | -10.1562 | -9.6020 | 0.4154 | -0.5542 | -65.8615 | -65.1824 | -1.8008 | -1.8006 | | 1.9093 | 0.6351 | 650 | 2.0900 | -7.1034 | -6.4496 | 0.4352 | -0.6537 | -55.3536 | -55.0064 | -1.5307 | -1.5306 | | 1.978 | 0.6839 | 700 | 1.9643 | -5.1638 | -4.6928 | 0.4593 | -0.4710 | -49.4976 | -48.5413 | -1.2420 | -1.2419 | | 2.6252 | 0.7328 | 750 | 1.8926 | -6.6759 | -6.1506 | 0.4396 | -0.5254 | -54.3567 | 
-53.5815 | -1.3560 | -1.3560 | | 2.0384 | 0.7816 | 800 | 1.8552 | -6.4512 | -5.9923 | 0.4374 | -0.4588 | -53.8292 | -52.8324 | -1.2189 | -1.2188 | | 2.3167 | 0.8305 | 850 | 1.8255 | -5.8191 | -5.3851 | 0.4549 | -0.4341 | -51.8050 | -50.7256 | -1.1902 | -1.1901 | | 2.1526 | 0.8793 | 900 | 1.8196 | -5.7219 | -5.2966 | 0.4549 | -0.4252 | -51.5102 | -50.4014 | -1.1751 | -1.1750 | | 2.0182 | 0.9282 | 950 | 1.8220 | -5.6982 | -5.2706 | 0.4593 | -0.4276 | -51.4235 | -50.3224 | -1.1750 | -1.1749 | | 1.3984 | 0.9770 | 1000 | 1.8199 | -5.6953 | -5.2697 | 0.4571 | -0.4255 | -51.4207 | -50.3128 | -1.1748 | -1.1747 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.0.0+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
lmstudio-community/aya-23-35B-GGUF
lmstudio-community
2024-05-23T19:38:06Z
355
14
transformers
[ "transformers", "gguf", "text-generation", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "base_model:CohereForAI/aya-23-35B", "base_model:quantized:CohereForAI/aya-23-35B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-05-23T19:33:21Z
--- library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi license: cc-by-nc-4.0 quantized_by: bartowski pipeline_tag: text-generation lm_studio: param_count: 35b use_case: general release_date: 23-05-2024 model_creator: CohereForAI prompt_template: Cohere Command R system_prompt: You are a helpful AI assistant base_model: cohere original_repo: CohereForAI/aya-23-35B base_model: CohereForAI/aya-23-35B --- ## 💫 Community Model> Aya 23 35B by Cohere For AI *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [Cohere For AI](https://huggingface.co/CohereForAI)<br> **Original model**: [aya-23-35B](https://huggingface.co/CohereForAI/aya-23-35B)<br> **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2965](https://github.com/ggerganov/llama.cpp/releases/tag/b2965)<br> ## Model Summary: Aya 23 are brand new instruction tuned multilingual models from Cohere. This model should perform well at logic across a wide variety of languages.<br> This is the 35B version of the model. The performance is quite high especially when used for multilingual tasks where most other models in this size range lack training data. ## Prompt template: Choose the `Cohere Command R` preset in your LM Studio. Under the hood, the model will see a prompt that's formatted like so: ``` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|> {system_prompt} <|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|> {prompt} <|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ``` ## Technical Details Aya 23 covers the following languages: - Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese The Aya training dataset can be found here: - https://huggingface.co/datasets/CohereForAI/aya_collection More technical details can be found from Cohere [here](https://cohere.com/research/papers/aya-command-23-35b-and-35b-technical-report-2024-05-23) ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. 🙏 Special thanks to [Kalomaze](https://github.com/kalomaze), [Dampf](https://github.com/Dampfinchen) and [turboderp](https://github.com/turboderp/) for their work on the dataset (linked [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)) that was used for calculating the imatrix for all sizes. ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. 
LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
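Outside LM Studio, these GGUF files also load with llama-cpp-python; a minimal sketch (assumed: the quant filename is a glob guess and should be checked against the repo's file list):

```python
# Hypothetical usage sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="lmstudio-community/aya-23-35B-GGUF",
    filename="*Q4_K_M.gguf",  # glob over quant names; assumed to match one file
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant"},
        {"role": "user", "content": "Translate 'good morning' into Japanese."},
    ],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```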
yifanxie/able-ara-1
yifanxie
2024-05-23T19:37:35Z
147
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "conversational", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-23T19:35:09Z
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---

# Model Card

## Summary

This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [google/gemma-2b](https://huggingface.co/google/gemma-2b)

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.

```bash
pip install transformers==4.40.2
```

Also make sure you are providing your Hugging Face token to the pipeline if the model lives in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running

```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```

- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`

```python
from transformers import pipeline

generate_text = pipeline(
    model="yifanxie/able-ara-1",
    torch_dtype="auto",
    trust_remote_code=True,
    use_fast=True,
    device_map={"": "cuda:0"},
    token=True,
)

# generation configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 486
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)

messages = [
    {
        "role": "system",
        "content": "You are a friendly and polite chatbot.",
    },
    {"role": "user", "content": "Hi, how are you?"},
    {"role": "assistant", "content": "I'm doing great, how about you?"},
    {"role": "user", "content": "Why is drinking water so healthy?"},
]

res = generate_text(
    messages,
    renormalize_logits=True
)
print(res[0]["generated_text"][-1]["content"])
```

You can print a sample prompt after applying the chat template to see how the input is fed to the tokenizer:

```python
print(generate_text.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
))
```

You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "yifanxie/able-ara-1"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.

messages = [
    {
        "role": "system",
        "content": "You are a friendly and polite chatbot.",
    },
    {"role": "user", "content": "Hi, how are you?"},
    {"role": "assistant", "content": "I'm doing great, how about you?"},
    {"role": "user", "content": "Why is drinking water so healthy?"},
]

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    use_fast=True,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
model.cuda().eval()

# generation configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 486
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

tokens = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    renormalize_logits=True
)[0]

tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```

## Quantization and sharding

You can load the model using quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Also, sharding across multiple GPUs is possible by setting `device_map="auto"`.

## Model Architecture

```
GemmaForCausalLM(
  (model): GemmaModel(
    (embed_tokens): Embedding(256000, 2048, padding_idx=0)
    (layers): ModuleList(
      (0-17): 18 x GemmaDecoderLayer(
        (self_attn): GemmaSdpaAttention(
          (q_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (k_proj): Linear(in_features=2048, out_features=256, bias=False)
          (v_proj): Linear(in_features=2048, out_features=256, bias=False)
          (o_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (rotary_emb): GemmaRotaryEmbedding()
        )
        (mlp): GemmaMLP(
          (gate_proj): Linear(in_features=2048, out_features=16384, bias=False)
          (up_proj): Linear(in_features=2048, out_features=16384, bias=False)
          (down_proj): Linear(in_features=16384, out_features=2048, bias=False)
          (act_fn): PytorchGELUTanh()
        )
        (input_layernorm): GemmaRMSNorm()
        (post_attention_layernorm): GemmaRMSNorm()
      )
    )
    (norm): GemmaRMSNorm()
  )
  (lm_head): Linear(in_features=2048, out_features=256000, bias=False)
)
```

## Model Configuration

This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.

## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
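To make the quantization note above concrete, here is a minimal 4-bit loading sketch; it assumes the `bitsandbytes` package is installed alongside `transformers` and that at least one CUDA GPU is available:

```python
from transformers import AutoModelForCausalLM

# 4-bit weights via bitsandbytes; device_map="auto" shards layers across visible GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "yifanxie/able-ara-1",
    load_in_4bit=True,
    device_map="auto",
    trust_remote_code=True,
)
print(model.get_memory_footprint())  # rough check that the quantized model fits
```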
PhillipGuo/hp-lat-llama-No_PCA-epsilon0.5-pgd_layer8_16_24_30-def_layer0-ultrachat-towards1-away0-sft1-8
PhillipGuo
2024-05-23T19:30:09Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T19:30:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PhillipGuo/hp-lat-llama-No_PCA-epsilon3.0-pgd_layer8_16_24_30-def_layer0-ultrachat-towards1-away0-sft1-8
PhillipGuo
2024-05-23T19:29:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T19:29:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SoumilB7/MathematicalLlama8B
SoumilB7
2024-05-23T19:25:17Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "trl", "mathematical-reasoning", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-23T19:10:37Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- llama
- trl
- mathematical-reasoning
---

# Uploaded model

- **Developed by:** SoumilB7
- **License:** apache-2.0
SuYee189/mGPT_V5
SuYee189
2024-05-23T19:18:34Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T19:18:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Rimyy/MISTRAL-finetuneGSMdata3exp
Rimyy
2024-05-23T19:18:16Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T19:14:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
smmgh/distilbert-base-uncased-finetuned-ner
smmgh
2024-05-23T19:11:53Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-23T17:07:15Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1955
- Precision: 0.7462
- Recall: 0.7843
- F1: 0.7648
- Accuracy: 0.9395

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.198         | 1.0   | 8236  | 0.1907          | 0.7312    | 0.7762 | 0.7530 | 0.9372   |
| 0.1611        | 2.0   | 16472 | 0.1899          | 0.7492    | 0.7795 | 0.7641 | 0.9395   |
| 0.1324        | 3.0   | 24708 | 0.1955          | 0.7462    | 0.7843 | 0.7648 | 0.9395   |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
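A minimal usage sketch (not part of the original card): running the checkpoint through the `token-classification` pipeline; the example sentence is illustrative.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="smmgh/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face was founded in New York City."))
```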
pchopalli/whisper-small-or
pchopalli
2024-05-23T19:11:48Z
52
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "or", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T19:10:45Z
---
language:
- or
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Oriya - Prashant C
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: or
      split: test
      args: 'config: or, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 26.817933296883545
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Small Oriya - Prashant C

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3176
- Wer Ortho: 61.6491
- Wer: 26.8179

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer Ortho | Wer     |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.0116        | 9.6154 | 500  | 0.3176          | 61.6491   | 26.8179 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
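For quick testing, a minimal transcription sketch (not from the original card); `audio.wav` is a placeholder for any local speech file:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="pchopalli/whisper-small-or")
# "audio.wav" is a placeholder path; ffmpeg handles decoding and resampling.
print(asr("audio.wav")["text"])
```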
UtkuCicek/new_marks
UtkuCicek
2024-05-23T19:09:06Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-22T17:42:28Z
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers-training
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
inference: true
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# Text-to-image finetuning - UtkuCicek/new_marks

This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **UtkuCicek/new-marks-data** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: italian style mini pizza with mozerrella on the side:

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

Special VAE used for training: None.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
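Until the TODO above is filled in by the author, a minimal loading sketch with `diffusers` (assuming a CUDA GPU; the prompt is the one quoted in the card):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "UtkuCicek/new_marks", torch_dtype=torch.float16
).to("cuda")
image = pipe("italian style mini pizza with mozerrella on the side").images[0]
image.save("sample.png")
```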
chongdashu/007-alpaca-gemma-7b-16bit_q4_k_m
chongdashu
2024-05-23T19:06:23Z
3
0
transformers
[ "transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:quantized:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T19:03:58Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
base_model: unsloth/gemma-7b-bnb-4bit
---

# Uploaded model

- **Developed by:** chongdashu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-bnb-4bit

This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
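Since this repo ships a GGUF quantization for llama.cpp-compatible runtimes, a minimal download sketch follows; the `filename` argument is a guess inferred from the repo name and should be checked against the repo's file list:

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="chongdashu/007-alpaca-gemma-7b-16bit_q4_k_m",
    # Hypothetical filename; verify the exact name under "Files and versions".
    filename="007-alpaca-gemma-7b-16bit_q4_k_m.gguf",
)
print(path)
```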
dreamgen/llama3-8b-instruct-align-test1-kto
dreamgen
2024-05-23T19:05:55Z
67
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:cc", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T18:49:02Z
---
license: cc
---

- **What is this?** Nothing interesting, just an experiment.
- **License:** CC-BY-NC
jenniellama/Qwen-Qwen1.5-1.8B1716490325
jenniellama
2024-05-23T18:54:46Z
149
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T18:52:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xuliu15/FT-English-1h
xuliu15
2024-05-23T18:50:44Z
15
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech-clean", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T12:10:05Z
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- librispeech-clean
metrics:
- wer
model-index:
- name: Whisper Small English 1h
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Librispeech
      type: librispeech-clean
      config: default
      split: None
      args: 'config: english, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 53.45675203126608
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Small English 1h

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Librispeech dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8110
- Wer: 53.4568

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0582        | 10.0  | 200  | 1.8847          | 56.4620 |
| 0.0495        | 20.0  | 400  | 1.8598          | 55.1579 |
| 0.042         | 30.0  | 600  | 1.8303          | 54.2240 |
| 0.0309        | 40.0  | 800  | 1.8152          | 53.7118 |
| 0.0323        | 50.0  | 1000 | 1.8110          | 53.4568 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
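For reference, WER numbers like those in the table above can be computed with the `evaluate` library; this short sketch uses toy strings rather than the Librispeech data and shows the percentage scaling assumed here:

```python
import evaluate

wer_metric = evaluate.load("wer")
score = wer_metric.compute(
    predictions=["the cat sat on mat"],
    references=["the cat sat on the mat"],
)
print(f"WER: {100 * score:.2f}%")  # reported as a percentage, as in the table
```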
mNLP-project/gpt2-finetuned
mNLP-project
2024-05-23T18:49:14Z
18
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T08:09:06Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
base_model: openai-community/gpt2
model-index:
- name: gpt2-finetuned
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2-finetuned

This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6944
- Bleu: 0.0294
- Bertscore Precision: 0.1536
- Bertscore Recall: 0.1658
- Bertscore F1: 0.1592

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Bleu   | Bertscore Precision | Bertscore Recall | Bertscore F1 |
|:-------------:|:-----:|:------:|:---------------:|:------:|:-------------------:|:----------------:|:------------:|
| 4.716         | 1.0   | 5750   | 3.4413          | 0.0112 | 0.1417              | 0.1575           | 0.1489       |
| 4.5916        | 2.0   | 11500  | 3.2372          | 0.0119 | 0.1424              | 0.1583           | 0.1496       |
| 4.325         | 3.0   | 17250  | 3.0534          | 0.0128 | 0.1430              | 0.1587           | 0.1501       |
| 4.1626        | 4.0   | 23000  | 2.9061          | 0.0136 | 0.1433              | 0.1592           | 0.1505       |
| 4.0255        | 5.0   | 28750  | 2.7554          | 0.0148 | 0.1438              | 0.1599           | 0.1511       |
| 3.862         | 6.0   | 34500  | 2.6185          | 0.0346 | 0.1446              | 0.1605           | 0.1518       |
| 3.7367        | 7.0   | 40250  | 2.4945          | 0.0286 | 0.1456              | 0.1611           | 0.1527       |
| 3.7907        | 8.0   | 46000  | 2.3799          | 0.0401 | 0.1488              | 0.1617           | 0.1548       |
| 3.5181        | 9.0   | 51750  | 2.2704          | 0.0607 | 0.1490              | 0.1623           | 0.1551       |
| 3.3377        | 10.0  | 57500  | 2.1710          | 0.0804 | 0.1498              | 0.1627           | 0.1558       |
| 3.294         | 11.0  | 63250  | 2.0876          | 0.0221 | 0.1512              | 0.1633           | 0.1568       |
| 3.1612        | 12.0  | 69000  | 2.0004          | 0.0234 | 0.1516              | 0.1637           | 0.1572       |
| 3.1257        | 13.0  | 74750  | 1.9356          | 0.0244 | 0.1518              | 0.1642           | 0.1575       |
| 3.1347        | 14.0  | 80500  | 1.8769          | 0.0257 | 0.1525              | 0.1646           | 0.1581       |
| 2.8094        | 15.0  | 86250  | 1.8210          | 0.0268 | 0.1527              | 0.1649           | 0.1584       |
| 2.8519        | 16.0  | 92000  | 1.7776          | 0.0275 | 0.1530              | 0.1652           | 0.1587       |
| 2.782         | 17.0  | 97750  | 1.7438          | 0.0282 | 0.1532              | 0.1654           | 0.1589       |
| 2.9097        | 18.0  | 103500 | 1.7183          | 0.0289 | 0.1535              | 0.1657           | 0.1591       |
| 2.881         | 19.0  | 109250 | 1.6999          | 0.0293 | 0.1536              | 0.1658           | 0.1592       |
| 2.6302        | 20.0  | 115000 | 1.6944          | 0.0294 | 0.1536              | 0.1658           | 0.1592       |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
edinlp/zephyr-full-dpo
edinlp
2024-05-23T18:47:31Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T18:25:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nisargvp/hc-mistral-alpaca
nisargvp
2024-05-23T18:44:48Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2024-05-23T17:42:03Z
--- license: apache-2.0 library_name: peft tags: - axolotl - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 model-index: - name: hc-mistral-alpaca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: mistralai/Mistral-7B-v0.1 model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer is_mistral_derived_model: true load_in_8bit: false load_in_4bit: true strict: false lora_fan_in_fan_out: false data_seed: 49 seed: 49 datasets: - path: _synth_data/alpaca_synth_queries_healed.jsonl type: sharegpt conversation: alpaca shards: 10 # This will divide the dataset into 10 shards shards_idx: 2 # This will load only the 3rd shard (indexing starts from 0) dataset_prepared_path: last_run_prepared val_set_size: 0.1 output_dir: ./qlora-alpaca-out hub_model_id: nisargvp/hc-mistral-alpaca adapter: qlora lora_model_dir: sequence_len: 896 sample_packing: true pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: lora_target_modules: - gate_proj - down_proj - up_proj - q_proj - v_proj - k_proj - o_proj wandb_project: hc-axolotl-mistral wandb_entity: nisargvp gradient_accumulation_steps: 4 micro_batch_size: 16 eval_batch_size: 16 num_epochs: 1 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 max_grad_norm: 1.0 adam_beta2: 0.95 adam_epsilon: 0.00001 save_total_limit: 12 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true loss_watchdog_threshold: 5.0 loss_watchdog_patience: 3 warmup_steps: 20 evals_per_epoch: 4 eval_table_size: eval_table_max_new_tokens: 128 saves_per_epoch: 6 debug: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>" save_safetensors: true ``` </details><br> # hc-mistral-alpaca This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1384 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 49 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 20 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1477 | 0.0098 | 1 | 1.1538 | | 0.1856 | 0.2537 | 26 | 0.1796 | | 0.1554 | 0.5073 | 52 | 0.1488 | | 0.1364 | 0.7610 | 78 | 0.1384 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.1.2+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
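For quick inference, the QLoRA adapter can be loaded directly with PEFT. The snippet below is an untested sketch: the alpaca-style prompt wrapper is an assumption inferred from the `conversation: alpaca` setting in the config above, and `load_in_4bit` simply mirrors the training-time quantization.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "nisargvp/hc-mistral-alpaca"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Loads the Mistral-7B base and applies the QLoRA adapter in one call
model = AutoPeftModelForCausalLM.from_pretrained(repo, load_in_4bit=True)

# Alpaca-style prompt (assumed from `conversation: alpaca` above)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat does this model do?\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```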
BWangila/ppo-LunarLander-v2_01
BWangila
2024-05-23T18:44:01Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-23T18:43:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 275.41 +/- 21.44 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
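Completing the TODO stub above, a minimal load-and-evaluate sketch could look like this. The checkpoint filename is an assumption (verify it against the repository's file list), and `gymnasium` with the Box2D extra must be installed.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; check the repo files for the actual .zip name
checkpoint = load_from_hub("BWangila/ppo-LunarLander-v2_01", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```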
ZaneHorrible/Adam_ViTL-32_224-2e-4-batch_16_epoch_4_classes_24
ZaneHorrible
2024-05-23T18:42:32Z
197
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-large-patch32-224-in21k", "base_model:finetune:google/vit-large-patch32-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-23T17:24:16Z
--- license: apache-2.0 base_model: google/vit-large-patch32-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: Adam_ViTL-32_224-2e-4-batch_16_epoch_4_classes_24 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9482758620689655 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Adam_ViTL-32_224-2e-4-batch_16_epoch_4_classes_24 This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2083 - Accuracy: 0.9483 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7984 | 0.07 | 100 | 0.8039 | 0.8606 | | 0.4352 | 0.14 | 200 | 0.5735 | 0.8463 | | 0.3651 | 0.21 | 300 | 0.3951 | 0.8937 | | 0.3133 | 0.28 | 400 | 0.4525 | 0.8894 | | 0.2641 | 0.35 | 500 | 0.3618 | 0.9023 | | 0.2104 | 0.42 | 600 | 0.4240 | 0.8922 | | 0.1787 | 0.49 | 700 | 0.4070 | 0.8879 | | 0.1412 | 0.56 | 800 | 0.3259 | 0.9124 | | 0.2121 | 0.63 | 900 | 0.3575 | 0.8994 | | 0.1491 | 0.7 | 1000 | 0.2769 | 0.9152 | | 0.268 | 0.77 | 1100 | 0.3432 | 0.9195 | | 0.2378 | 0.84 | 1200 | 0.3622 | 0.9109 | | 0.0812 | 0.91 | 1300 | 0.2857 | 0.9210 | | 0.127 | 0.97 | 1400 | 0.2787 | 0.9253 | | 0.0256 | 1.04 | 1500 | 0.3116 | 0.9267 | | 0.027 | 1.11 | 1600 | 0.2889 | 0.9282 | | 0.0508 | 1.18 | 1700 | 0.3048 | 0.9310 | | 0.0932 | 1.25 | 1800 | 0.2732 | 0.9382 | | 0.0745 | 1.32 | 1900 | 0.3275 | 0.9195 | | 0.0675 | 1.39 | 2000 | 0.2505 | 0.9440 | | 0.0347 | 1.46 | 2100 | 0.2686 | 0.9382 | | 0.0121 | 1.53 | 2200 | 0.2888 | 0.9454 | | 0.1104 | 1.6 | 2300 | 0.2375 | 0.9440 | | 0.0778 | 1.67 | 2400 | 0.2345 | 0.9411 | | 0.0029 | 1.74 | 2500 | 0.2924 | 0.9282 | | 0.0063 | 1.81 | 2600 | 0.2867 | 0.9353 | | 0.0394 | 1.88 | 2700 | 0.3384 | 0.9224 | | 0.0043 | 1.95 | 2800 | 0.2855 | 0.9195 | | 0.025 | 2.02 | 2900 | 0.3218 | 0.9296 | | 0.0096 | 2.09 | 3000 | 0.2810 | 0.9368 | | 0.0018 | 2.16 | 3100 | 0.1971 | 0.9526 | | 0.0102 | 2.23 | 3200 | 0.2175 | 0.9497 | | 0.0016 | 2.3 | 3300 | 0.2341 | 0.9454 | | 0.0024 | 2.37 | 3400 | 0.2607 | 0.9425 | | 0.0024 | 2.44 | 3500 | 0.2380 | 0.9440 | | 0.0019 | 2.51 | 3600 | 0.2422 | 0.9382 | | 0.0062 | 2.58 | 3700 | 0.2191 | 0.9483 | | 0.0416 | 2.65 | 3800 | 0.2491 | 0.9483 | | 0.002 | 2.72 | 3900 | 0.2201 | 0.9497 | | 0.0013 | 2.79 | 4000 | 0.2242 | 0.9468 | | 0.0012 | 2.86 | 4100 | 0.2182 | 0.9440 | | 0.0011 | 2.92 | 4200 | 0.2079 | 0.9497 | | 0.001 | 2.99 | 4300 | 0.2083 | 0.9483 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
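A hedged inference sketch for this checkpoint, using the standard `image-classification` pipeline; the image path is a placeholder, and the 24 class names come from the folder structure of the `imagefolder` dataset.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ZaneHorrible/Adam_ViTL-32_224-2e-4-batch_16_epoch_4_classes_24",
)
# "example.jpg" is a placeholder path; any RGB image works
print(classifier("example.jpg", top_k=3))
```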
SoorajK1/Data_Analysis_Workflow_model_test
SoorajK1
2024-05-23T18:38:36Z
107
0
transformers
[ "transformers", "onnx", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T17:41:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
chongdashu/007-alpaca-gemma-7b-q8_0
chongdashu
2024-05-23T18:37:45Z
5
0
transformers
[ "transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:quantized:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T18:33:34Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - gguf base_model: unsloth/gemma-7b-bnb-4bit --- # Uploaded model - **Developed by:** chongdashu - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
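Since this repository ships GGUF weights, one way to run it locally is with `llama-cpp-python`. This is an untested sketch: the `.gguf` filename and the alpaca-style prompt are assumptions inferred from the repo name.

```python
from llama_cpp import Llama

# Filename is an assumption; download the .gguf file from the repo first
llm = Llama(model_path="007-alpaca-gemma-7b-q8_0.gguf", n_ctx=2048)

prompt = "### Instruction:\nSay hello in one sentence.\n\n### Response:\n"  # alpaca-style, assumed
out = llm(prompt, max_tokens=64, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```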
toghrultahirov/pii_mbert_az
toghrultahirov
2024-05-23T18:36:07Z
21
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-23T15:34:05Z
--- license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: pii_mbert_az results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pii_mbert_az This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1319 - Precision: 0.8726 - Recall: 0.9026 - F1: 0.8874 - Accuracy: 0.9619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: reduce_lr_on_plateau - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 313 | 0.1464 | 0.8797 | 0.8615 | 0.8705 | 0.9587 | | 0.2128 | 2.0 | 626 | 0.1319 | 0.8726 | 0.9026 | 0.8874 | 0.9619 | | 0.2128 | 3.0 | 939 | 0.1461 | 0.8689 | 0.8924 | 0.8805 | 0.9596 | | 0.0783 | 4.0 | 1252 | 0.1529 | 0.8837 | 0.9049 | 0.8942 | 0.9620 | | 0.0443 | 5.0 | 1565 | 0.1921 | 0.8657 | 0.9157 | 0.8900 | 0.9615 | | 0.0443 | 6.0 | 1878 | 0.1647 | 0.8975 | 0.9224 | 0.9098 | 0.9685 | | 0.0201 | 7.0 | 2191 | 0.1725 | 0.8904 | 0.9183 | 0.9041 | 0.9674 | | 0.0098 | 8.0 | 2504 | 0.1766 | 0.8917 | 0.9199 | 0.9056 | 0.9682 | | 0.0098 | 9.0 | 2817 | 0.1756 | 0.8926 | 0.9202 | 0.9062 | 0.9686 | | 0.007 | 10.0 | 3130 | 0.1763 | 0.8916 | 0.9189 | 0.9051 | 0.9684 | | 0.007 | 11.0 | 3443 | 0.1772 | 0.8907 | 0.9183 | 0.9043 | 0.9682 | | 0.007 | 12.0 | 3756 | 0.1773 | 0.8895 | 0.9173 | 0.9032 | 0.9680 | | 0.0067 | 13.0 | 4069 | 0.1775 | 0.8892 | 0.9170 | 0.9029 | 0.9680 | | 0.0067 | 14.0 | 4382 | 0.1775 | 0.8897 | 0.9170 | 0.9032 | 0.9679 | | 0.0062 | 15.0 | 4695 | 0.1775 | 0.8897 | 0.9170 | 0.9032 | 0.9679 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
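For PII extraction, the checkpoint can be used through the `token-classification` pipeline; `aggregation_strategy="simple"` merges wordpieces back into whole entity spans. The Azerbaijani example sentence and its PII are invented for illustration.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="toghrultahirov/pii_mbert_az",
    aggregation_strategy="simple",  # merge wordpieces into full entity spans
)
# Invented example: "My name is Toghrul, my phone number is 050-123-45-67."
print(ner("Mənim adım Toğruldur, telefon nömrəm 050-123-45-67."))
```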
Shotaro30678/sentiment-analysis-ncu-chat-bot
Shotaro30678
2024-05-23T18:33:17Z
0
0
null
[ "safetensors", "region:us" ]
null
2024-05-23T17:06:42Z
## config: ```python num_labels = 7 id2label = { 0: "neutral", 1: "anger", 2: "disgust", 3: "fear", 4: "happiness", 5: "sadness", 6: "surprise" } label2id = { "neutral": 0, "anger": 1, "disgust": 2, "fear": 3, "happiness": 4, "sadness": 5, "surprise": 6 } ```
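Because the card ships only weights plus this label mapping, the mapping has to be passed explicitly when a classifier is instantiated. The snippet below is a sketch of that wiring; the base checkpoint is a hypothetical stand-in, since the card does not name one.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

id2label = {0: "neutral", 1: "anger", 2: "disgust", 3: "fear",
            4: "happiness", 5: "sadness", 6: "surprise"}
label2id = {v: k for k, v in id2label.items()}

base = "bert-base-multilingual-cased"  # hypothetical base; the card does not name one
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(
    base, num_labels=7, id2label=id2label, label2id=label2id
)
```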
tsavage68/MedQA_L3_1000steps_1e6rate_01beta_CSFTDPO
tsavage68
2024-05-23T18:30:29Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T18:26:08Z
--- license: llama3 base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: MedQA_L3_1000steps_1e6rate_01beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_1000steps_1e6rate_01beta_CSFTDPO This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4143 - Rewards/chosen: -0.2461 - Rewards/rejected: -2.6298 - Rewards/accuracies: 0.8088 - Rewards/margins: 2.3838 - Logps/rejected: -60.1531 - Logps/chosen: -33.7891 - Logits/rejected: -1.3940 - Logits/chosen: -1.3910 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6869 | 0.0489 | 50 | 0.6696 | -0.2211 | -0.2710 | 0.7253 | 0.0498 | -36.5645 | -33.5400 | -0.7298 | -0.7290 | | 0.4779 | 0.0977 | 100 | 0.5887 | 1.4526 | 1.0417 | 0.6945 | 0.4109 | -23.4374 | -16.8024 | -0.8047 | -0.8036 | | 0.5752 | 0.1466 | 150 | 0.4975 | 0.5331 | -0.2997 | 0.7473 | 0.8328 | -36.8518 | -25.9976 | -0.8723 | -0.8705 | | 0.4157 | 0.1954 | 200 | 0.5087 | -0.0815 | -1.0065 | 0.7538 | 0.9250 | -43.9199 | -32.1434 | -0.9039 | -0.9019 | | 0.4271 | 0.2443 | 250 | 0.4619 | 0.5202 | -0.5333 | 0.7648 | 1.0535 | -39.1874 | -26.1265 | -0.9341 | -0.9319 | | 0.3162 | 0.2931 | 300 | 0.4272 | 0.2052 | -1.3157 | 0.8110 | 1.5209 | -47.0122 | -29.2765 | -1.0303 | -1.0281 | | 0.3868 | 0.3420 | 350 | 0.4366 | 0.0191 | -1.4354 | 0.7868 | 1.4545 | -48.2090 | -31.1376 | -1.1172 | -1.1146 | | 0.4267 | 0.3908 | 400 | 0.4253 | 0.8142 | -0.6501 | 0.8044 | 1.4642 | -40.3556 | -23.1869 | -1.2091 | -1.2069 | | 0.4816 | 0.4397 | 450 | 0.4235 | 0.7057 | -0.6954 | 0.7978 | 1.4011 | -40.8093 | -24.2719 | -1.2618 | -1.2590 | | 0.5777 | 0.4885 | 500 | 0.4147 | 0.5199 | -1.2061 | 0.8088 | 1.7260 | -45.9158 | -26.1293 | -1.3148 | -1.3119 | | 0.3051 | 0.5374 | 550 | 0.4133 | 0.2933 | -1.3715 | 0.8022 | 1.6647 | -47.5694 | -28.3956 | -1.3646 | -1.3616 | | 0.5378 | 0.5862 | 600 | 0.4219 | -0.4403 | -2.6925 | 0.8088 | 2.2522 | -60.7803 | -35.7319 | -1.3525 | -1.3496 | | 0.359 | 0.6351 | 650 | 0.4122 | -0.0585 | -2.2242 | 0.8132 | 2.1656 | -56.0965 | -31.9139 | -1.3793 | -1.3763 | | 0.4137 | 0.6839 | 700 | 0.4019 | 0.0561 | -2.0220 | 0.8066 | 2.0781 | -54.0746 | -30.7675 | -1.3921 | -1.3890 | | 0.3899 | 0.7328 | 750 | 0.4093 | -0.1488 | -2.4231 | 0.8110 | 2.2743 | -58.0863 | -32.8165 | -1.3920 | -1.3890 | | 0.3645 | 0.7816 | 800 | 0.4095 | -0.2104 | -2.5505 | 0.8132 | 2.3401 | -59.3594 | -33.4322 | -1.3965 | -1.3935 | | 0.4993 | 0.8305 | 850 | 0.4157 | -0.2412 | -2.6172 | 0.8088 | 2.3760 | -60.0272 | -33.7410 | -1.3947 | -1.3918 | | 0.6907 | 0.8793 | 900 | 0.4164 | -0.2462 | -2.6292 | 0.8110 | 2.3829 | -60.1466 | -33.7908 | -1.3944 | -1.3914 | | 0.3846 | 0.9282 | 950 | 0.4140 | -0.2447 | -2.6315 | 0.8110 | 2.3868 | -60.1702 | -33.7755 | -1.3939 | -1.3909 | | 0.3404 | 0.9770 | 1000 | 0.4143 | -0.2461 | -2.6298 | 0.8088 | 2.3838 | -60.1531 | -33.7891 | -1.3940 | -1.3910 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.0.0+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
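As a hedged usage sketch, the DPO-tuned checkpoint can be queried through the Llama 3 chat template; the medical question below is invented and the generation settings are arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tsavage68/MedQA_L3_1000steps_1e6rate_01beta_CSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Invented MedQA-style question
messages = [{"role": "user", "content": "What is the first-line treatment for uncomplicated hypertension?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```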
chongdashu/007-alpaca-gemma-7b-merged-16bit
chongdashu
2024-05-23T18:24:23Z
7
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T18:18:54Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-7b-bnb-4bit --- # Uploaded model - **Developed by:** chongdashu - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
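Because this repo contains merged 16-bit weights rather than an adapter, it loads like any plain transformers checkpoint. An untested sketch; the alpaca-style prompt is an assumption from the repo name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "chongdashu/007-alpaca-gemma-7b-merged-16bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "### Instruction:\nWrite one sentence about spies.\n\n### Response:\n"  # assumed alpaca format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```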
Dmitry282/test1
Dmitry282
2024-05-23T18:24:13Z
0
0
null
[ "ru", "license:apache-2.0", "region:us" ]
null
2024-05-23T18:23:20Z
--- license: apache-2.0 language: - ru ---
14rohnisingh/autotrain-9bnwq-plawx
14rohnisingh
2024-05-23T18:22:24Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T18:22:13Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
hgnoi/rQxglXT9wkGMvNul
hgnoi
2024-05-23T18:21:50Z
135
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T18:20:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bert-base/fun_trained_convbert_epoch_5
bert-base
2024-05-23T18:18:56Z
167
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T18:15:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alejandrovil/llama3-AWQ
alejandrovil
2024-05-23T18:15:11Z
78
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "license:apache-2.0", "text-generation-inference", "awq", "region:us" ]
text-generation
2024-05-03T23:08:35Z
--- library_name: transformers tags: - 4-bit - AWQ - text-generation - autotrain_compatible - endpoints_compatible - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl model-index: - name: Hermes-2-Pro-Llama-3-8B results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. pipeline_tag: text-generation inference: false quantized_by: Suparious --- # NousResearch/Hermes-2-Pro-Llama-3-8B AWQ - Original model: [Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/Hermes-2-Pro-Llama-3-8B-AWQ" system_message = "You are Hermes-2-Pro-Llama-3-8B, incarnated as a powerful AI. You were created by NousResearch." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
Apness/rururu
Apness
2024-05-23T18:13:18Z
90
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ru", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T18:10:24Z
--- language: - ru license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Small Ru - BMSTU results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Ru - BMSTU This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.1876 - Cer: 3.9623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.1707 | 0.4924 | 1000 | 0.2238 | 4.8641 | | 0.1641 | 0.9847 | 2000 | 0.1984 | 4.1821 | | 0.0696 | 1.4771 | 3000 | 0.1921 | 4.1234 | | 0.0712 | 1.9695 | 4000 | 0.1876 | 3.9623 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.0.1+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
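A hedged transcription sketch via the ASR pipeline: `audio.wav` is a placeholder (any mono clip works; the pipeline resamples to 16 kHz), and pinning `language="russian"` keeps the decoder from language-switching.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Apness/rururu")
# "audio.wav" is a placeholder path
result = asr("audio.wav", generate_kwargs={"language": "russian", "task": "transcribe"})
print(result["text"])
```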
SamariTanOG/Finetuned-BioMed
SamariTanOG
2024-05-23T18:06:09Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T18:05:45Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-v0.3-bnb-4bit --- # Uploaded model - **Developed by:** SamariTanOG - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
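For inference, the checkpoint can be reloaded the same way it was trained, through Unsloth's `FastLanguageModel` (a CUDA GPU is required for 4-bit loading). Untested sketch; the prompt is arbitrary.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="SamariTanOG/Finetuned-BioMed",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode

inputs = tokenizer("Explain what a biomarker is.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```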
PhillipGuo/hp-lat-llama-No_PCA-epsilon1.5-pgd_layer16_24_30-def_layer0-ultrachat-towards1-away0-sft1-104
PhillipGuo
2024-05-23T18:00:59Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T18:00:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
antony-pk/Phi-3-mini-4k-instruct-ultrachat200k
antony-pk
2024-05-23T17:58:28Z
150
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "trl", "sft", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T17:53:51Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KKdu/MEDOVUCHA-gpt2-Q4_K_M-GGUF
KKdu
2024-05-23T17:49:38Z
1
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "endpoints_compatible", "region:us" ]
null
2024-05-23T17:49:36Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo --- # KKdu/MEDOVUCHA-gpt2-Q4_K_M-GGUF This model was converted to GGUF format from [`KKdu/MEDOVUCHA-gpt2`](https://huggingface.co/KKdu/MEDOVUCHA-gpt2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/KKdu/MEDOVUCHA-gpt2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo KKdu/MEDOVUCHA-gpt2-Q4_K_M-GGUF --model medovucha-gpt2.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo KKdu/MEDOVUCHA-gpt2-Q4_K_M-GGUF --model medovucha-gpt2.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m medovucha-gpt2.Q4_K_M.gguf -n 128 ```
nayaniiii/whisper-small-punjabi
nayaniiii
2024-05-23T17:47:48Z
77
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "pa", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T15:17:51Z
--- language: - pa license: apache-2.0 tags: - generated_from_trainer base_model: openai/whisper-small datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Punjabi - Nayani Jindal results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: pa-IN split: None args: 'config: mr, split: test' metrics: - type: wer value: 49.85875706214689 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Punjabi - Nayani Jindal This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4789 - Wer: 49.8588 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 0.0003 | 16.3934 | 1000 | 0.4789 | 49.8588 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.1.2 - Datasets 2.19.1 - Tokenizers 0.19.1
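Since fine-tuned Whisper checkpoints can language-switch on short clips, one hedged option is to force Punjabi decoding explicitly; the audio file below is a placeholder and is assumed to be 16 kHz mono.

```python
import soundfile as sf
from transformers import WhisperForConditionalGeneration, WhisperProcessor

repo = "nayaniiii/whisper-small-punjabi"
processor = WhisperProcessor.from_pretrained(repo)
model = WhisperForConditionalGeneration.from_pretrained(repo)

audio, sr = sf.read("sample_pa.wav")  # placeholder; expects 16 kHz mono audio
features = processor(audio, sampling_rate=sr, return_tensors="pt").input_features
forced = processor.get_decoder_prompt_ids(language="punjabi", task="transcribe")
ids = model.generate(features, forced_decoder_ids=forced)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```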
LoneStriker/aya-23-35B-6.0bpw-h6-exl2
LoneStriker
2024-05-23T17:47:38Z
7
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-05-23T17:34:26Z
--- library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi license: cc-by-nc-4.0 --- # Model Card for Aya-23-35B ## Model Summary Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages. This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B). We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese. Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), which also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: aya-23-35B - Model Size: 35 billion parameters **Try Aya 23** You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23). ### Usage Please install transformers from the source repository, which includes the necessary changes for this model. ```python # pip install 'git+https://github.com/huggingface/transformers.git' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/aya-23-35B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ### Example Notebook [This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes). ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions. 
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese. **Context length**: 8192 ### Evaluation <img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left: auto; margin-right: auto; display: block;"/> <img src="winrates.png" alt="average win rates" width="650" style="margin-left: auto; margin-right: auto; display: block;"/> Please refer to the [Aya 23 technical report](https://drive.google.com/file/d/1YKBPo61pnl97C1c_1C2ZVOnPhqf7MLSc/view) for further details about the base model, data, instruction tuning, and evaluation. ### Model Card Contact For errors or additional questions about details in this model card, contact [email protected]. ### Terms of Use We hope that the release of this model will make community-based research efforts more accessible by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try the model today You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23). ### Citation info ```bibtex @misc{aya23technicalreport, title={Aya 23: Open Weight Releases to Further Multilingual Progress}, author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker}, url={https://drive.google.com/file/d/1YKBPo61pnl97C1c_1C2ZVOnPhqf7MLSc/view}, year={2024} } ```
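Because this repo holds exl2-quantized weights rather than standard safetensors, the `transformers` snippet in the card does not apply directly. A minimal loading sketch with the ExLlamaV2 example API, assuming a local download of this repo (the directory path, sampling settings, and prompt are illustrative, and real prompts should be wrapped in the chat-template tokens shown in the card); the same pattern should apply to the 4.65bpw and 4.0bpw exl2 repos below:

```python
# Sketch under assumptions: requires the exllamav2 package and a local
# download of this exl2 repo; the directory path below is a placeholder.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./aya-23-35B-6.0bpw-h6-exl2"  # hypothetical local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocated as layers load
model.load_autosplit(cache)               # split across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.3  # mirrors the card's sampling choice

print(generator.generate_simple("Write a short greeting in Turkish.", settings, 100))
```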
hgnoi/FsC7wABLoMmX9RNp
hgnoi
2024-05-23T17:35:10Z
133
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T17:33:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fearlessdots/WizardLM-2-7B-abliterated-GGUF
fearlessdots
2024-05-23T17:34:32Z
43
8
null
[ "gguf", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T16:57:24Z
--- license: apache-2.0 --- # WizardLM-2-7B-abliterated This is the **WizardLM-2-7B** model with orthogonalized bfloat16 safetensor weights, based on the implementation by `@failspy`. For more info: - Original paper preview presenting the methodology: <https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction> - Jupyter notebook containing an implementation of the methodology, by `@failspy`: <https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb> ## Prompt Template This model uses the prompt format from **Vicuna** and supports **multi-turn** conversation. --- # Original model card: <p style="font-size:20px;" align="center"> 🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> 🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> • 🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">GitHub Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News 🔥🔥🔥 [2024/04/15] We introduce and open-source WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models and consistently outperforms all existing state-of-the-art open-source models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size. - WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models 10x its size. For more details on WizardLM-2, please read our [release blog post](https://wizardlm.github.io/WizardLM2) and the upcoming paper. ## Model Details * **Model name**: WizardLM-2 7B * **Developed by**: WizardLM@Microsoft AI * **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * **Parameters**: 7B * **Language(s)**: Multilingual * **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2) * **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM) * **Paper**: WizardLM-2 (Upcoming) * **License**: Apache 2.0 ## Model Capacities **MT-Bench** We also adopt the automatic MT-Bench evaluation framework based on GPT-4, proposed by lmsys, to assess the performance of models. WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the other leading baselines at the 7B to 70B model scales. 
<p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> **Human Preferences Evaluation** We carefully collected a complex and challenging set of real-world instructions, covering the main categories of human requests, such as writing, coding, math, reasoning, agent use, and multilingual tasks. We report the win:loss rate without ties: - WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314. - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a **fully AI-powered synthetic training system** to train WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage ❗<b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT: I am WizardLM.</s>...... ``` <b> Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
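Since this repo ships GGUF quantizations, the Vicuna template above can be exercised from Python via `llama-cpp-python`; a minimal sketch, where the `.gguf` file name is a placeholder for whichever quantization you actually download from this repo:

```python
# Hedged sketch: model_path below is hypothetical; substitute the
# quantization file you pulled from this repo.
from llama_cpp import Llama

llm = Llama(model_path="wizardlm-2-7b-abliterated.Q4_K_M.gguf", n_ctx=2048)

# Vicuna-style single-turn prompt, matching the template shown in the card.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Hi ASSISTANT:"
)
out = llm(prompt, max_tokens=128, stop=["</s>", "USER:"])
print(out["choices"][0]["text"])
```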
bert-base/fun_trained_convbert_epoch_4
bert-base
2024-05-23T17:34:12Z
184
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T17:33:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GENIAC-Team-Ozaki/full-sft-finetuned-stage4-iter86000-v2
GENIAC-Team-Ozaki
2024-05-23T17:29:36Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T17:24:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/aya-23-35B-4.65bpw-h6-exl2
LoneStriker
2024-05-23T17:22:55Z
7
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-23T17:12:04Z
--- library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi license: cc-by-nc-4.0 --- # Model Card for Aya-23-35B ## Model Summary Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages. This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B). We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese. Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), which also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: aya-23-35B - Model Size: 35 billion parameters **Try Aya 23** You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23). ### Usage Please install transformers from the source repository, which includes the necessary changes for this model. ```python # pip install 'git+https://github.com/huggingface/transformers.git' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/aya-23-35B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ### Example Notebook [This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes). ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions. 
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese. **Context length**: 8192 ### Evaluation <img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left: auto; margin-right: auto; display: block;"/> <img src="winrates.png" alt="average win rates" width="650" style="margin-left: auto; margin-right: auto; display: block;"/> Please refer to the [Aya 23 technical report](https://drive.google.com/file/d/1YKBPo61pnl97C1c_1C2ZVOnPhqf7MLSc/view) for further details about the base model, data, instruction tuning, and evaluation. ### Model Card Contact For errors or additional questions about details in this model card, contact [email protected]. ### Terms of Use We hope that the release of this model will make community-based research efforts more accessible by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try the model today You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23). ### Citation info ```bibtex @misc{aya23technicalreport, title={Aya 23: Open Weight Releases to Further Multilingual Progress}, author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker}, url={https://drive.google.com/file/d/1YKBPo61pnl97C1c_1C2ZVOnPhqf7MLSc/view}, year={2024} } ```
Raidenv/autotrain-1igto-kytcv
Raidenv
2024-05-23T17:22:20Z
0
0
null
[ "dataset:cifar10", "region:us" ]
null
2024-05-23T16:44:16Z
--- datasets: - cifar10 metrics: - accuracy ---
devjwsong/ppo-CartPole-v1
devjwsong
2024-05-23T17:19:45Z
0
0
null
[ "tensorboard", "CartPole-v1", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-05-23T17:19:38Z
--- tags: - CartPole-v1 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 193.40 +/- 54.35 name: mean_reward verified: false --- # PPO Agent Playing CartPole-v1 This is a trained model of a PPO agent playing CartPole-v1. # Hyperparameters ```python {'exp_name': 'Carpole' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'CartPole-v1' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'devjwsong/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
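The hyperparameters above follow the CleanRL-style single-file PPO script; as a reference point, a minimal sketch of evaluating such an agent in `gymnasium`, where `policy(obs)` stands in for the trained actor network (not shown here):

```python
# Evaluation-loop sketch only; `policy` is a hypothetical callable mapping an
# observation to a discrete CartPole action (0 or 1).
import gymnasium as gym

def evaluate(policy, episodes: int = 10) -> float:
    env = gym.make("CartPole-v1")
    returns = []
    for _ in range(episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy(obs))
            total += reward
            done = terminated or truncated
        returns.append(total)
    env.close()
    return sum(returns) / len(returns)  # compare against 193.40 +/- 54.35 above
```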
khof312/mms-tts-swh-female-1
khof312
2024-05-23T17:17:13Z
114
0
transformers
[ "transformers", "safetensors", "vits", "text-to-audio", "mms", "mms-tts-swh", "audio", "text-to-speech", "sw", "dataset:mozilla-foundation/common_voice_16_1", "endpoints_compatible", "region:us" ]
text-to-speech
2024-04-25T20:14:21Z
--- datasets: - mozilla-foundation/common_voice_16_1 language: - sw pipeline_tag: text-to-speech tags: - mms - vits - mms-tts-swh - audio ---
abhi-8/Age-gender-predictor
abhi-8
2024-05-23T17:16:16Z
0
1
null
[ "code", "video", "photo", "age", "gender", "image-text-to-text", "en", "dataset:nlphuji/utk_faces", "dataset:cledoux42/AGEGENDERETHINCITY", "license:mit", "region:us" ]
image-text-to-text
2024-05-23T17:07:42Z
--- license: mit datasets: - nlphuji/utk_faces - cledoux42/AGEGENDERETHINCITY language: - en metrics: - accuracy - mae - mse pipeline_tag: image-text-to-text tags: - code - video - photo - age - gender ---
ZaneHorrible/Adam_ViTL-16_224-2e-4-batch_16_epoch_4_classes_24
ZaneHorrible
2024-05-23T17:15:21Z
197
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-large-patch16-224", "base_model:finetune:google/vit-large-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-23T15:06:08Z
--- license: apache-2.0 base_model: google/vit-large-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: Adam_ViTL-16_224-2e-4-batch_16_epoch_4_classes_24 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9683908045977011 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Adam_ViTL-16_224-2e-4-batch_16_epoch_4_classes_24 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1211 - Accuracy: 0.9684 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6934 | 0.07 | 100 | 0.4138 | 0.8477 | | 0.4815 | 0.14 | 200 | 0.7227 | 0.8103 | | 0.3952 | 0.21 | 300 | 0.5867 | 0.8491 | | 0.6095 | 0.28 | 400 | 0.5975 | 0.8448 | | 0.3448 | 0.35 | 500 | 0.4000 | 0.8721 | | 0.2604 | 0.42 | 600 | 0.3335 | 0.9080 | | 0.3734 | 0.49 | 700 | 0.4264 | 0.875 | | 0.3074 | 0.56 | 800 | 0.3634 | 0.8908 | | 0.312 | 0.63 | 900 | 0.4347 | 0.875 | | 0.1076 | 0.7 | 1000 | 0.3203 | 0.9052 | | 0.2001 | 0.77 | 1100 | 0.2668 | 0.9224 | | 0.0507 | 0.84 | 1200 | 0.2265 | 0.9353 | | 0.0767 | 0.91 | 1300 | 0.2797 | 0.9239 | | 0.3201 | 0.97 | 1400 | 0.2977 | 0.9109 | | 0.0293 | 1.04 | 1500 | 0.2849 | 0.9239 | | 0.0353 | 1.11 | 1600 | 0.2918 | 0.9339 | | 0.0787 | 1.18 | 1700 | 0.3012 | 0.9253 | | 0.0749 | 1.25 | 1800 | 0.2383 | 0.9454 | | 0.0233 | 1.32 | 1900 | 0.3272 | 0.9152 | | 0.1635 | 1.39 | 2000 | 0.2857 | 0.9124 | | 0.0586 | 1.46 | 2100 | 0.3785 | 0.9109 | | 0.0103 | 1.53 | 2200 | 0.2032 | 0.9468 | | 0.0082 | 1.6 | 2300 | 0.2091 | 0.9397 | | 0.0695 | 1.67 | 2400 | 0.1739 | 0.9526 | | 0.0253 | 1.74 | 2500 | 0.2056 | 0.9511 | | 0.0648 | 1.81 | 2600 | 0.1803 | 0.9526 | | 0.0286 | 1.88 | 2700 | 0.2018 | 0.9440 | | 0.0057 | 1.95 | 2800 | 0.2332 | 0.9483 | | 0.0056 | 2.02 | 2900 | 0.3459 | 0.9267 | | 0.0111 | 2.09 | 3000 | 0.1954 | 0.9540 | | 0.0001 | 2.16 | 3100 | 0.1586 | 0.9626 | | 0.0059 | 2.23 | 3200 | 0.1716 | 0.9526 | | 0.0063 | 2.3 | 3300 | 0.1548 | 0.9612 | | 0.0003 | 2.37 | 3400 | 0.1813 | 0.9569 | | 0.0006 | 2.44 | 3500 | 0.1339 | 0.9626 | | 0.0004 | 2.51 | 3600 | 0.1492 | 0.9583 | | 0.0004 | 2.58 | 3700 | 0.1238 | 0.9698 | | 0.0001 | 2.65 | 3800 | 0.1156 | 0.9713 | | 0.0001 | 2.72 | 3900 | 0.1272 | 0.9684 | | 0.0 | 2.79 | 4000 | 0.1303 | 0.9698 | | 0.0001 | 2.86 | 4100 | 0.1269 | 0.9684 | | 0.0001 | 2.92 | 4200 | 0.1209 | 0.9684 | | 0.0 | 2.99 | 4300 | 0.1211 | 0.9684 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
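A minimal inference sketch for the fine-tuned classifier with the `pipeline` API; `example.jpg` is a placeholder for any local image or URL, and the checkpoint's 24 labels come from the training `imagefolder`:

```python
# Hedged sketch: classify one image with the fine-tuned ViT-L/16 checkpoint.
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="ZaneHorrible/Adam_ViTL-16_224-2e-4-batch_16_epoch_4_classes_24",
)
print(clf("example.jpg")[0])  # top label and score
```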
LoneStriker/aya-23-35B-4.0bpw-h6-exl2
LoneStriker
2024-05-23T17:12:02Z
8
1
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-05-23T17:02:24Z
--- library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi license: cc-by-nc-4.0 --- # Model Card for Aya-23-35B ## Model Summary Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages. This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B). We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese. Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), which also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: aya-23-35B - Model Size: 35 billion parameters **Try Aya 23** You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23). ### Usage Please install transformers from the source repository, which includes the necessary changes for this model. ```python # pip install 'git+https://github.com/huggingface/transformers.git' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/aya-23-35B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ### Example Notebook [This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes). ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions. 
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese. **Context length**: 8192 ### Evaluation <img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left: auto; margin-right: auto; display: block;"/> <img src="winrates.png" alt="average win rates" width="650" style="margin-left: auto; margin-right: auto; display: block;"/> Please refer to the [Aya 23 technical report](https://drive.google.com/file/d/1YKBPo61pnl97C1c_1C2ZVOnPhqf7MLSc/view) for further details about the base model, data, instruction tuning, and evaluation. ### Model Card Contact For errors or additional questions about details in this model card, contact [email protected]. ### Terms of Use We hope that the release of this model will make community-based research efforts more accessible by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try the model today You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23). ### Citation info ```bibtex @misc{aya23technicalreport, title={Aya 23: Open Weight Releases to Further Multilingual Progress}, author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker}, url={https://drive.google.com/file/d/1YKBPo61pnl97C1c_1C2ZVOnPhqf7MLSc/view}, year={2024} } ```
adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1
adalbertojunior
2024-05-23T17:11:19Z
267
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-30T00:41:59Z
--- library_name: transformers tags: [] model-index: - name: Qwen1.5-32B-Dolphin-Portuguese-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: ENEM Challenge (No Images) type: eduagarcia/enem_challenge split: train args: num_few_shot: 3 metrics: - type: acc value: 74.74 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BLUEX (No Images) type: eduagarcia-temp/BLUEX_without_images split: train args: num_few_shot: 3 metrics: - type: acc value: 66.34 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: OAB Exams type: eduagarcia/oab_exams split: train args: num_few_shot: 3 metrics: - type: acc value: 53.71 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 RTE type: assin2 split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 93.66 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 STS type: eduagarcia/portuguese_benchmark split: test args: num_few_shot: 15 metrics: - type: pearson value: 77.7 name: pearson source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: FaQuAD NLI type: ruanchaves/faquad-nli split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 82.14 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HateBR Binary type: ruanchaves/hatebr split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 86.71 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: PT Hate Speech Binary type: hate_speech_portuguese split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 68.68 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: tweetSentBR type: eduagarcia/tweetsentbr_fewshot split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 72.82 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 name: Open Portuguese LLM Leaderboard --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. 
--> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] # Open Portuguese LLM Leaderboard Evaluation Results Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard) | Metric | Value | |--------------------------|---------| |Average |**75.17**| |ENEM Challenge (No Images)| 74.74| |BLUEX (No Images) | 66.34| |OAB Exams | 53.71| |Assin2 RTE | 93.66| |Assin2 STS | 77.70| |FaQuAD NLI | 82.14| |HateBR Binary | 86.71| |PT Hate Speech Binary | 68.68| |tweetSentBR | 72.82|
CXDuncan/whisper-small-ml-ct2
CXDuncan
2024-05-23T17:10:15Z
4
0
transformers
[ "transformers", "ml", "dataset:CXDuncan/Malayalam-IndicVoices", "endpoints_compatible", "region:us" ]
null
2024-05-23T14:54:06Z
--- datasets: - CXDuncan/Malayalam-IndicVoices language: - ml ---
quannv/ATP
quannv
2024-05-23T17:05:04Z
0
0
null
[ "pytorch", "license:apache-2.0", "region:us" ]
null
2024-05-17T16:11:43Z
--- license: apache-2.0 ---
kaleinaNyan/kolibri-mistral-0427-upd
kaleinaNyan
2024-05-23T17:00:40Z
7
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "ru", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T16:48:12Z
--- license: apache-2.0 language: - en - ru --- ## Description This is an instruction-following model (based on Mistral v0.1 Base) optimized for the Russian language. It was trained using [kolibrify](https://github.com/oKatanaaa/kolibrify) on a multitude of instruction datasets. The model uses the ChatML template. It was trained to be sensitive to the system prompt; experiment with it. Currently in pre-alpha; later releases will include more details regarding the training procedure and data mix. > [!NOTE] > This model is an improved version of the older kolibri-mistral-0427. ## Instruction following evals The model was tested using the following benchmarks: - [ruIFEval](https://github.com/NLP-Core-Team/ruIFEval) - [ifeval](https://github.com/google-research/google-research/tree/master/instruction_following_eval) | Eval name | Strict Value | Loose Value | |---------------------------------|----|----| |Avg. |*53.81*|*56.57*| |ifeval-prompt-level |52.68|56.19| |ifeval-instruction-level |62.82|66.18| |ru-ifeval-prompt-level |44.36|46.39| |ru-ifeval-instruction-level |55.39|57.55|
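Since the card names ChatML and stresses the system prompt, a minimal sketch of formatting a request, assuming the repo's tokenizer ships the ChatML chat template as stated (the system and user strings are illustrative):

```python
# Hedged sketch: build a ChatML prompt via the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaleinaNyan/kolibri-mistral-0427-upd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "Ты полезный ассистент. Отвечай кратко."},
    {"role": "user", "content": "Объясни, что такое градиентный спуск."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
out = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```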
kaleinaNyan/kolibri-mistral-0427-upd.gguf
kaleinaNyan
2024-05-23T16:59:39Z
13
0
transformers
[ "transformers", "gguf", "mistral", "en", "ru", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-22T10:45:41Z
--- license: apache-2.0 language: - en - ru --- ## Description This is an instruction-following model (based on Mistral v0.1 Base) optimized for the Russian language. It was trained using [kolibrify](https://github.com/oKatanaaa/kolibrify) on a multitude of instruction datasets. The model uses the ChatML template. It was trained to be sensitive to the system prompt; experiment with it. I recommend using the model with LM Studio. Currently in pre-alpha; later releases will include more details regarding the training procedure and data mix. > [!NOTE] > This model is an improved version of the older kolibri-mistral-0427. ## Instruction following evals The model was tested using the following benchmarks: - [ruIFEval](https://github.com/NLP-Core-Team/ruIFEval) - [ifeval](https://github.com/google-research/google-research/tree/master/instruction_following_eval) | Eval name | Strict Value | Loose Value | |---------------------------------|----|----| |Avg. |*53.81*|*56.57*| |ifeval-prompt-level |52.68|56.19| |ifeval-instruction-level |62.82|66.18| |ru-ifeval-prompt-level |44.36|46.39| |ru-ifeval-instruction-level |55.39|57.55|
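A hedged local-inference sketch (the card itself recommends LM Studio): running one of the GGUF files with llama-cpp-python; the quant filename is illustrative.

```python
# Minimal sketch: chat completion over a local GGUF file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="kolibri-mistral-0427-upd.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Suggest three ideas for a short article."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```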
DZoo/ppo-Huggy
DZoo
2024-05-23T16:57:16Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-05-23T16:56:16Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial teaching you how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: DZoo/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
satish860/hc-tinyllama-alpaca
satish860
2024-05-23T16:55:03Z
2
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2024-05-23T16:51:15Z
--- license: apache-2.0 library_name: peft tags: - axolotl - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T model-index: - name: hc-tinyllama-alpaca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer load_in_8bit: false load_in_4bit: true strict: false datasets: - path: imdb_1k_alpaca.jsonl type: alpaca dataset_prepared_path: last_run_prepared val_set_size: 0.05 output_dir: ./outputs/qlora-out hub_model_id: satish860/hc-tinyllama-alpaca adapter: qlora lora_model_dir: sequence_len: 4096 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: ``` </details><br> # hc-tinyllama-alpaca This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 8.0151 | 0.0851 | 1 | 7.9045 | | 7.967 | 0.2553 | 3 | 7.7350 | | 6.5633 | 0.5106 | 6 | 4.8012 | | 1.5838 | 0.7660 | 9 | 0.7756 | | 0.2319 | 1.0213 | 12 | 0.1592 | | 0.0669 | 1.2340 | 15 | 0.0973 | | 0.0344 | 1.4894 | 18 | 0.0453 | | 0.1146 | 1.7447 | 21 | 0.0754 | | 0.0896 | 2.0 | 24 | 0.0517 | | 0.0293 | 2.2340 | 27 | 0.0486 | | 0.0378 | 2.4894 | 30 | 0.0566 | | 0.0523 | 2.7447 | 33 | 0.0270 | | 0.0886 | 3.0 | 36 | 0.0226 | | 0.0504 | 3.2128 | 39 | 0.0232 | | 0.089 | 3.4681 | 42 | 0.0225 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.1.2+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
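A hedged inference sketch (not from the card): attaching the QLoRA adapter from this repo to its TinyLlama base with peft, and prompting it in the Alpaca format the axolotl config above trains on; the example instruction is illustrative.

```python
# Minimal sketch: base model + PEFT adapter, Alpaca-style prompt per the config above.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "satish860/hc-tinyllama-alpaca")

prompt = (
    "### Instruction:\nSummarize the review below.\n\n"
    "### Input:\nGreat movie, loved every minute of it.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```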
nccsnlp/Meta-Llama-3-8B_ct_prompt1b_ft200_v1_merged
nccsnlp
2024-05-23T16:53:09Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T16:48:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bferrando/git-base-naruto2
bferrando
2024-05-23T16:47:06Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "git", "image-text-to-text", "generated_from_trainer", "base_model:microsoft/git-base", "base_model:finetune:microsoft/git-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-05-23T15:55:19Z
--- license: mit base_model: microsoft/git-base tags: - generated_from_trainer model-index: - name: git-base-naruto2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-naruto2 This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0286 - Wer Score: 0.3515 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 3 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:------:|:----:|:---------------:|:---------:| | 10.2502 | 0.0909 | 5 | 9.1485 | 62.2485 | | 8.8428 | 0.1818 | 10 | 8.3401 | 80.6545 | | 8.1981 | 0.2727 | 15 | 7.8041 | 80.8121 | | 7.7018 | 0.3636 | 20 | 7.3280 | 80.3636 | | 7.2331 | 0.4545 | 25 | 6.8603 | 40.0485 | | 6.7819 | 0.5455 | 30 | 6.3932 | 20.9152 | | 6.3079 | 0.6364 | 35 | 5.9241 | 22.0424 | | 5.8546 | 0.7273 | 40 | 5.4566 | 6.7515 | | 5.3926 | 0.8182 | 45 | 4.9938 | 6.9636 | | 4.9258 | 0.9091 | 50 | 4.5336 | 6.8909 | | 4.4733 | 1.0 | 55 | 4.0808 | 7.4061 | | 4.0241 | 1.0909 | 60 | 3.6383 | 6.9273 | | 3.5895 | 1.1818 | 65 | 3.2081 | 7.2121 | | 3.1657 | 1.2727 | 70 | 2.7929 | 7.2121 | | 2.7513 | 1.3636 | 75 | 2.3954 | 7.8303 | | 2.3667 | 1.4545 | 80 | 2.0232 | 8.1576 | | 1.9959 | 1.5455 | 85 | 1.6770 | 8.8727 | | 1.6518 | 1.6364 | 90 | 1.3655 | 9.2242 | | 1.349 | 1.7273 | 95 | 1.0882 | 9.3758 | | 1.081 | 1.8182 | 100 | 0.8536 | 9.1939 | | 0.8455 | 1.9091 | 105 | 0.6599 | 8.8667 | | 0.6619 | 2.0 | 110 | 0.5056 | 9.2727 | | 0.5017 | 2.0909 | 115 | 0.3868 | 9.3636 | | 0.385 | 2.1818 | 120 | 0.2934 | 10.9758 | | 0.2916 | 2.2727 | 125 | 0.2254 | 10.6848 | | 0.2292 | 2.3636 | 130 | 0.1742 | 9.3758 | | 0.1773 | 2.4545 | 135 | 0.1368 | 8.8909 | | 0.1397 | 2.5455 | 140 | 0.1088 | 8.4364 | | 0.1214 | 2.6364 | 145 | 0.0907 | 0.4121 | | 0.0964 | 2.7273 | 150 | 0.0764 | 0.3939 | | 0.0812 | 2.8182 | 155 | 0.0649 | 0.3818 | | 0.07 | 2.9091 | 160 | 0.0597 | 0.3818 | | 0.0613 | 3.0 | 165 | 0.0516 | 0.4121 | | 0.0454 | 3.0909 | 170 | 0.0472 | 0.3879 | | 0.0492 | 3.1818 | 175 | 0.0422 | 0.4 | | 0.0411 | 3.2727 | 180 | 0.0411 | 0.4364 | | 0.035 | 3.3636 | 185 | 0.0394 | 0.4303 | | 0.0378 | 3.4545 | 190 | 0.0370 | 0.3879 | | 0.0389 | 3.5455 | 195 | 0.0348 | 0.3939 | | 0.0341 | 3.6364 | 200 | 0.0335 | 0.3636 | | 0.0391 | 3.7273 | 205 | 0.0327 | 0.3697 | | 0.0266 | 3.8182 | 210 | 0.0314 | 0.5212 | | 0.0282 | 3.9091 | 215 | 0.0308 | 2.6364 | | 0.0306 | 4.0 | 220 | 0.0300 | 0.4848 | | 0.0263 | 4.0909 | 225 | 0.0306 | 0.3758 | | 0.0237 | 4.1818 | 230 | 0.0300 | 0.3697 | | 0.0255 | 4.2727 | 235 | 0.0292 | 0.3515 | | 0.0232 | 4.3636 | 240 | 0.0290 | 0.3576 | | 0.024 | 4.4545 | 245 | 0.0291 | 0.3515 | | 0.0243 | 4.5455 | 250 | 0.0294 | 0.3636 | | 0.0245 | 4.6364 | 255 | 0.0296 | 0.3697 | | 0.022 | 4.7273 | 260 | 0.0294 | 0.3576 | | 0.0228 | 4.8182 | 265 | 0.0291 | 0.3576 | | 0.0255 | 4.9091 | 270 | 0.0287 | 0.3515 | | 0.025 | 5.0 | 275 | 0.0286 | 0.3515 | ### Framework versions 
- Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
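A hedged usage sketch (not from the card): captioning an image with this fine-tuned GIT checkpoint through the image-to-text pipeline; the image path is illustrative.

```python
# Minimal sketch: image captioning with the fine-tuned GIT model.
from transformers import pipeline

captioner = pipeline("image-to-text", model="bferrando/git-base-naruto2")
print(captioner("frame.png")[0]["generated_text"])
```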
cccmatthew/model_test
cccmatthew
2024-05-23T16:45:33Z
107
0
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-23T16:40:25Z
--- license: apache-2.0 language: - en metrics: - f1 ---
lmstudio-community/aya-23-8B-GGUF
lmstudio-community
2024-05-23T16:43:17Z
2,968
6
transformers
[ "transformers", "gguf", "text-generation", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "base_model:CohereForAI/aya-23-8B", "base_model:quantized:CohereForAI/aya-23-8B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-23T16:02:11Z
--- library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi license: cc-by-nc-4.0 quantized_by: bartowski pipeline_tag: text-generation lm_studio: param_count: 8b use_case: general release_date: 23-05-2024 model_creator: CohereForAI prompt_template: Cohere Command R system_prompt: You are a helpful AI assistant base_model: cohere original_repo: CohereForAI/aya-23-8B base_model: CohereForAI/aya-23-8B --- ## 💫 Community Model> Aya 23 8B by Cohere For AI *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [Cohere For AI](https://huggingface.co/CohereForAI)<br> **Original model**: [aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B)<br> **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2965](https://github.com/ggerganov/llama.cpp/releases/tag/b2965)<br> ## Model Summary: Aya 23 is a family of brand-new, instruction-tuned multilingual models from Cohere. This model should perform well at logic across a wide variety of languages.<br> This is the 8B version of the model, and it seems to perform well for its size across multilingual benchmarks. ## Prompt template: Choose the `Cohere Command R` preset in LM Studio. Under the hood, the model will see a prompt that's formatted like so: ``` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|> {system_prompt} <|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|> {prompt} <|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ``` ## Technical Details Aya 23 covers the following languages: - Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese The Aya training dataset can be found here: - https://huggingface.co/datasets/CohereForAI/aya_collection More technical details are available from Cohere [here](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. 🙏 Special thanks to [Kalomaze](https://github.com/kalomaze), [Dampf](https://github.com/Dampfinchen) and [turboderp](https://github.com/turboderp/) for their work on the dataset (linked [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)) that was used for calculating the imatrix for all sizes. ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model.
LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
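A hedged helper (not from the card) that assembles the Cohere Command R prompt string shown in the "Prompt template" section above, for backends that take raw completions; the exact whitespace handling belongs to the LM Studio preset, so treat this layout as illustrative.

```python
# Minimal sketch: filling the Command R template from the card.
def build_prompt(system_prompt: str, user_prompt: str) -> str:
    return (
        "<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>" + system_prompt
        + "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>" + user_prompt
        + "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
    )

print(build_prompt("You are a helpful AI assistant", "Translate 'good morning' into French."))
```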
bsmadhu/my_awesome_qa_model
bsmadhu
2024-05-23T16:42:07Z
63
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-05-23T16:29:43Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: bsmadhu/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bsmadhu/my_awesome_qa_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5903 - Validation Loss: 1.8578 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.4043 | 2.2286 | 0 | | 1.8482 | 1.8578 | 1 | | 1.5903 | 1.8578 | 2 | ### Framework versions - Transformers 4.41.0 - TensorFlow 2.15.0 - Datasets 2.19.1 - Tokenizers 0.19.1
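A hedged inference sketch (not from the card): querying the fine-tuned TF checkpoint with the question-answering pipeline; passing framework="tf" to force the TensorFlow weights is an assumption, and the question/context pair is illustrative.

```python
# Minimal sketch: extractive QA with the fine-tuned DistilBERT (TF) model.
from transformers import pipeline

qa = pipeline("question-answering", model="bsmadhu/my_awesome_qa_model", framework="tf")
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], result["score"])
```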
ymechqrane/llama3-8b-sft-qlora-regex_v240524
ymechqrane
2024-05-23T16:41:16Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2024-05-23T16:28:13Z
--- license: llama3 library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B model-index: - name: llama3-8b-sft-qlora-regex_v240524 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-sft-qlora-regex_v240524 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
hyojin99/whisper-large
hyojin99
2024-05-23T16:40:29Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T16:40:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hyojin99/whisper_large
hyojin99
2024-05-23T16:40:27Z
76
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ko", "dataset:hyojin99/EBRC", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T02:51:49Z
--- language: - ko license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer base_model: openai/whisper-small datasets: - hyojin99/EBRC model-index: - name: ft_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft_model This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the EBRC dataset. It achieves the following results on the evaluation set: - Loss: 0.2978 - Cer: 10.5708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 6000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.3829 | 1.0 | 1500 | 0.3817 | 15.4574 | | 0.1779 | 2.0 | 3000 | 0.3238 | 13.5614 | | 0.0732 | 3.0 | 4500 | 0.2954 | 11.2004 | | 0.0228 | 4.0 | 6000 | 0.2978 | 10.5708 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
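A hedged transcription sketch (not from the card): running the fine-tuned Whisper checkpoint over a Korean audio file with the ASR pipeline; the file path is illustrative.

```python
# Minimal sketch: Korean speech-to-text with the fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="hyojin99/whisper_large")
print(asr("sample_ko.wav")["text"])
```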
AmrutaMuthal/mero_controlnet_inpaint_obj_segmask_with_warmup
AmrutaMuthal
2024-05-23T16:38:11Z
3
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2024-05-17T07:02:20Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ShahlaDnshi96/mobile_mistral_3
ShahlaDnshi96
2024-05-23T16:32:21Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-07T09:18:57Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 datasets: - generator model-index: - name: mobile_mistral_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobile_mistral_3 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
MrezaPRZ/codellama_high_quality_sft_postgres
MrezaPRZ
2024-05-23T16:30:55Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T16:26:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/nsmGBNKmOwJRJkSo
hgnoi
2024-05-23T16:26:07Z
131
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T16:24:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GENIAC-Team-Ozaki/lora-dpo-finetuned-stage4-full-sft-0.1_1e-6_ep-1
GENIAC-Team-Ozaki
2024-05-23T16:24:43Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T05:48:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ehristoforu/BoW-v1-768px
ehristoforu
2024-05-23T16:21:33Z
4
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "BoW", "sd2.1-768px", "sd2", "realism", "art", "anime", "merge", "text-to-image", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-19T20:23:20Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image inference: false tags: [safetensors, stable-diffusion, BoW, sd2.1-768px, sd2, realism, art, anime, merge] ---
IMFisa/mukadas_quantized_model
IMFisa
2024-05-23T16:15:30Z
2
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T16:12:32Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** IMFisa - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mika5883/inverse_gec_finetuned
mika5883
2024-05-23T16:09:21Z
114
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:mika5883/inverse_gec", "base_model:finetune:mika5883/inverse_gec", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-23T15:06:00Z
--- base_model: mika5883/inverse_gec tags: - generated_from_trainer metrics: - bleu model-index: - name: inverse_gec_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # inverse_gec_finetuned This model is a fine-tuned version of [mika5883/inverse_gec](https://huggingface.co/mika5883/inverse_gec) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4594 - Bleu: 82.2574 - Gen Len: 22.4784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 40 | 0.3337 | 84.782 | 22.4412 | | No log | 2.0 | 80 | 0.2966 | 84.9348 | 22.4536 | | No log | 3.0 | 120 | 0.2708 | 85.0786 | 22.4484 | | No log | 4.0 | 160 | 0.2561 | 85.1691 | 22.4416 | | No log | 5.0 | 200 | 0.2517 | 85.2139 | 22.44 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
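A hedged generation sketch (not from the card): running the fine-tuned T5 checkpoint as a plain text2text model. The input sentence is illustrative, and the task direction (the repo name suggests "inverse" grammatical-error correction, i.e. introducing errors) is an inference, not something the card states.

```python
# Minimal sketch: seq2seq generation with the fine-tuned T5 model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "mika5883/inverse_gec_finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("She has been working here since 2015.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```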
vonvolous/tattoo_oldschool_LoRA
vonvolous
2024-05-23T16:08:16Z
11
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-23T14:52:32Z
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: In the style of TOK tattoo
widget: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - vonvolous/tattoo_oldschool_LoRA

<Gallery />

## Model description

These are vonvolous/tattoo_oldschool_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `In the style of TOK tattoo` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](vonvolous/tattoo_oldschool_LoRA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
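The `How to use` block above is still a TODO; a minimal sketch of the standard diffusers SDXL LoRA loading path, assuming default adapter weight names in the repo (the subject prompt is illustrative; the trigger phrase is the one from the card):

```python
# A sketch, not an official snippet: loads the LoRA on top of the SDXL base model.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("vonvolous/tattoo_oldschool_LoRA")

# Include the trigger phrase documented in the card.
image = pipe("a swallow, In the style of TOK tattoo").images[0]
image.save("tattoo.png")
```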
AI-Trailblazer/ddpm-celebahq-finetuned-butterflies-2epochs
AI-Trailblazer
2024-05-23T16:07:31Z
44
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-05-23T16:07:16Z
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---

# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)

Describe your model here

## Usage

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('AI-Trailblazer/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
hgnoi/xnFYq3Pcj03yo6K4
hgnoi
2024-05-23T16:03:20Z
131
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T16:01:43Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Jaman/gemma-Code-Instruct-Finetune-test
Jaman
2024-05-23T16:02:00Z
105
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T15:51:00Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
VictorSanh/idefics2-8b-docvqa-finetuned-tutorial
VictorSanh
2024-05-23T15:59:36Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:HuggingFaceM4/idefics2-8b", "base_model:finetune:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "region:us" ]
null
2024-04-12T20:58:26Z
---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: idefics2-8b-docvqa-finetuned-tutorial
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# idefics2-8b-docvqa-finetuned-tutorial

This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
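The card records only training settings; a minimal inference sketch assuming the documented Idefics2 chat-template API — the image URL and question below are placeholders:

```python
# A sketch, not the tutorial's code: standard Idefics2-style document VQA inference.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "VictorSanh/idefics2-8b-docvqa-finetuned-tutorial", torch_dtype=torch.bfloat16
).to("cuda")

# Placeholder document image; substitute your own.
image = Image.open(requests.get("https://example.com/invoice.png", stream=True).raw)

messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "What is the total amount due?"},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```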
mmnga/Mistral-7B-Instruct-v0.3-gguf
mmnga
2024-05-23T15:58:46Z
815
2
null
[ "gguf", "mistral", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-05-23T14:44:25Z
---
license: apache-2.0
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
tags:
- mistral
---

# Mistral-7B-Instruct-v0.3-gguf

A gguf-format conversion of [Mistral-7B-Instruct-v0.3, released by mistralai](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).

The imatrix data was built from [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).

## Usage

```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'Mistral-7B-Instruct-v0.3-Q4_0.gguf' -n 128 -p '[INST] 今晩の夕食のレシピを教えて [/INST] '
```

(The sample prompt asks, in Japanese, for a dinner recipe.)
chchen/Mistral-7B-Instruct-v0.3-ORPO
chchen
2024-05-23T15:48:59Z
2
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "trl", "dpo", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "region:us" ]
null
2024-05-23T08:38:42Z
---
license: apache-2.0
library_name: peft
tags:
- llama-factory
- lora
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.3
model-index:
- name: Mistral-7B-Instruct-v0.3-ORPO
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Mistral-7B-Instruct-v0.3-ORPO

This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the dpo_mix_en dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8734
- Rewards/chosen: -0.0810
- Rewards/rejected: -0.1017
- Rewards/accuracies: 0.5720
- Rewards/margins: 0.0208
- Logps/rejected: -1.0175
- Logps/chosen: -0.8098
- Logits/rejected: -3.1455
- Logits/chosen: -3.1171
- Sft Loss: 0.8098
- Odds Ratio Loss: 0.6360

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 3.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss | Odds Ratio Loss |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:---------------:|
| 0.9464        | 0.8891 | 500  | 0.8919          | -0.0828        | -0.1031          | 0.5690             | 0.0202          | -1.0306        | -0.8281      | -3.1432         | -3.1149       | 0.8281   | 0.6374          |
| 0.8737        | 1.7782 | 1000 | 0.8774          | -0.0814        | -0.1019          | 0.5760             | 0.0205          | -1.0186        | -0.8136      | -3.1431         | -3.1139       | 0.8136   | 0.6371          |
| 0.8923        | 2.6673 | 1500 | 0.8734          | -0.0810        | -0.1017          | 0.5720             | 0.0208          | -1.0175        | -0.8098      | -3.1455         | -3.1171       | 0.8098   | 0.6360          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
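Since this repo is a PEFT LoRA adapter (`library_name: peft`), a minimal loading sketch assuming the standard `PeftModel` path on top of the listed base model:

```python
# A sketch, not an official snippet: attach the ORPO LoRA adapter to the base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model = PeftModel.from_pretrained(base, "chchen/Mistral-7B-Instruct-v0.3-ORPO")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
```

The same pattern works for any LoRA adapter trained with LLaMA-Factory's TRL integration; only the repo ids change.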
SpeechResearch/whisper-ft-normal
SpeechResearch
2024-05-23T15:46:06Z
147
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T15:45:26Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
yasirchemmakh/log-rabbit-7b-lora
yasirchemmakh
2024-05-23T15:43:39Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:WhiteRabbitNeo/WhiteRabbitNeo-7B-v1.5a", "base_model:finetune:WhiteRabbitNeo/WhiteRabbitNeo-7B-v1.5a", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T15:43:31Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: WhiteRabbitNeo/WhiteRabbitNeo-7B-v1.5a
---

# Uploaded model

- **Developed by:** yasirchemmakh
- **License:** apache-2.0
- **Finetuned from model:** WhiteRabbitNeo/WhiteRabbitNeo-7B-v1.5a

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
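A minimal loading sketch, assuming the repo holds LoRA adapter weights (as the `-lora` suffix suggests) and using Unsloth's `FastLanguageModel`; settings such as `max_seq_length` and 4-bit loading are illustrative:

```python
# A sketch, not the author's documented usage: load the adapter with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yasirchemmakh/log-rabbit-7b-lora",  # resolves the base model from the adapter config
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode
```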