Dataset schema (column, dtype, and value range as shown in the dataset viewer):

| Column | Dtype | Range / values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-25 18:28:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 495 classes |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-25 18:28:16 |
| card | string | lengths 11 to 1.01M |
LoneStriker/goliath-120b-2.4bpw-h6-exl2
LoneStriker
2023-12-27T05:29:22Z
11
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-15T17:24:52Z
--- license: llama2 language: - en pipeline_tag: conversational --- # Goliath 120B An auto-regressive causal LM created by combining 2x finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) into one. Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix): - [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp) - [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite) - [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM) - [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI) # Prompting Format Both Vicuna and Alpaca will work, but due to the initial and final layers belonging primarily to Xwin, I expect Vicuna to work the best. # Merge process The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B). The layer ranges used are as follows: ```yaml - range 0, 16 Xwin - range 8, 24 Euryale - range 17, 32 Xwin - range 25, 40 Euryale - range 33, 48 Xwin - range 41, 56 Euryale - range 49, 64 Xwin - range 57, 72 Euryale - range 65, 80 Xwin ``` # Screenshots ![image/png](https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/Cat8_Rimaz6Ni7YhQiiGB.png) # Benchmarks Coming soon. # Acknowledgements Credit goes to [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit). Special thanks to [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.
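A minimal usage sketch (not part of the card above): loading a full-precision Goliath checkpoint with `transformers` and prompting it in the Vicuna style the card recommends. The repo id and prompt text are illustrative assumptions; in practice the quantized formats listed in the card are far more practical for a 120B model.

```python
# Sketch only (assumptions: repo id, prompt text, enough GPU memory / offloading via accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "alpindale/goliath-120b"  # illustrative; the exl2/GGUF/GPTQ repos above need their own loaders
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Vicuna-style prompt, as suggested in the "Prompting Format" section.
prompt = "USER: Write a short poem about mountains.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```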
Realgon/N_bert_twitterfin_padding80model
Realgon
2023-12-27T05:22:27Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-27T05:09:05Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: N_bert_twitterfin_padding80model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # N_bert_twitterfin_padding80model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0050 - Accuracy: 0.8894 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6072 | 1.0 | 597 | 0.3449 | 0.8748 | | 0.3281 | 2.0 | 1194 | 0.3080 | 0.8878 | | 0.2384 | 3.0 | 1791 | 0.3908 | 0.8894 | | 0.1552 | 4.0 | 2388 | 0.5590 | 0.8719 | | 0.1206 | 5.0 | 2985 | 0.6288 | 0.8802 | | 0.0501 | 6.0 | 3582 | 0.6952 | 0.8874 | | 0.0357 | 7.0 | 4179 | 0.7691 | 0.8827 | | 0.0314 | 8.0 | 4776 | 0.8138 | 0.8844 | | 0.0293 | 9.0 | 5373 | 0.8231 | 0.8886 | | 0.0199 | 10.0 | 5970 | 0.8076 | 0.8890 | | 0.0191 | 11.0 | 6567 | 0.8359 | 0.8903 | | 0.0071 | 12.0 | 7164 | 0.8779 | 0.8857 | | 0.0105 | 13.0 | 7761 | 0.9540 | 0.8874 | | 0.0068 | 14.0 | 8358 | 0.9292 | 0.8890 | | 0.0094 | 15.0 | 8955 | 0.9410 | 0.8903 | | 0.0065 | 16.0 | 9552 | 0.9804 | 0.8857 | | 0.0066 | 17.0 | 10149 | 0.9936 | 0.8878 | | 0.0025 | 18.0 | 10746 | 1.0035 | 0.8915 | | 0.0038 | 19.0 | 11343 | 1.0173 | 0.8878 | | 0.0024 | 20.0 | 11940 | 1.0050 | 0.8894 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
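A minimal inference sketch (not part of the card above), using the standard `transformers` pipeline; the example sentence is made up, and the label names come from the model's config rather than the card.

```python
# Sketch only: score a sentence with the fine-tuned classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="Realgon/N_bert_twitterfin_padding80model")
print(clf("Shares rallied after the earnings report beat expectations."))
```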
giangvlcs/textual_inversion_cat
giangvlcs
2023-12-27T05:22:25Z
11
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-24T17:14:18Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - giangvlcs/textual_inversion_cat These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
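A minimal usage sketch (not part of the card above), assuming the standard `diffusers` textual-inversion loader; the placeholder token `<cat-toy>` is an assumption taken from the diffusers example script, so check the repo files for the actual token name.

```python
# Sketch only: load the learned embedding into the base pipeline and generate an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("giangvlcs/textual_inversion_cat")

image = pipe("a photo of <cat-toy> on a beach", num_inference_steps=30).images[0]
image.save("cat.png")
```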
TingTing0104/distilbert-base-uncased-finetuned-tweet_hate
TingTing0104
2023-12-27T05:20:28Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-27T05:03:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-tweet_hate results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: hate metrics: - name: Accuracy type: accuracy value: 0.77 - name: F1 type: f1 value: 0.7711956429754464 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-tweet_hate This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.6390 - Accuracy: 0.77 - F1: 0.7712 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5003 | 1.0 | 282 | 0.4716 | 0.76 | 0.7613 | | 0.3428 | 2.0 | 564 | 0.4767 | 0.771 | 0.7721 | | 0.2559 | 3.0 | 846 | 0.5256 | 0.778 | 0.7789 | | 0.1811 | 4.0 | 1128 | 0.5839 | 0.774 | 0.7748 | | 0.134 | 5.0 | 1410 | 0.6390 | 0.77 | 0.7712 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.1.0+cu121 - Datasets 1.16.1 - Tokenizers 0.15.0
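A minimal inference sketch (not part of the card above): manual tokenizer-plus-model scoring so the predicted probabilities can be inspected alongside the model's own label mapping (the input sentence is made up).

```python
# Sketch only: run the classifier by hand and print its label mapping with the probabilities.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "TingTing0104/distilbert-base-uncased-finetuned-tweet_hate"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("I really enjoyed this conversation.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(model.config.id2label, probs)
```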
xdecoder/X-Decoder
xdecoder
2023-12-27T05:18:11Z
0
5
null
[ "license:apache-2.0", "region:us" ]
null
2022-12-22T05:45:48Z
--- license: apache-2.0 --- ***Click to Download!*** ## -> Models *Focal-T:* <br/> [xdecoder_focalt_last_novg.pt](https://huggingface.co/xdecoder/X-Decoder/resolve/main/xdecoder_focalt_last_novg.pt) <br/> [xdecoder_focalt_last.pt](https://huggingface.co/xdecoder/X-Decoder/resolve/main/xdecoder_focalt_last.pt) <br/> [xdecoder_focalt_best_openseg.pt](https://huggingface.co/xdecoder/X-Decoder/resolve/main/xdecoder_focalt_best_openseg.pt) <br/> *Focal-L:* <br/> [xdecoder_focall_last.pt](https://huggingface.co/xdecoder/X-Decoder/resolve/main/xdecoder_focall_last.pt) <br/> [xdecoder_focall_bestseg.pt](https://huggingface.co/xdecoder/X-Decoder/resolve/main/xdecoder_focall_bestseg.pt) <br/> ## -> Datasets [caption_class_similarity.pth](https://huggingface.co/xdecoder/X-Decoder/resolve/main/caption_class_similarity.pth) <br/> [captions_train2017_filtrefgumdval_filtvlp.json](https://huggingface.co/xdecoder/X-Decoder/resolve/main/captions_train2017_filtrefgumdval_filtvlp.json) <br/> [grounding_train2017_filtrefgumdval_filtvlp.json](https://huggingface.co/xdecoder/X-Decoder/resolve/main/grounding_train2017_filtrefgumdval_filtvlp.json) <br/> [panoptic_train2017_filtrefgumdval_filtvlp.json](https://huggingface.co/xdecoder/X-Decoder/resolve/main/panoptic_train2017_filtrefgumdval_filtvlp.json) <br/> [refcocog_umd_val.json](https://huggingface.co/xdecoder/X-Decoder/resolve/main/refcocog_umd_val.json) <br/> ## -> Evaluations [coco_caption.zip](https://huggingface.co/xdecoder/X-Decoder/resolve/main/coco_caption.zip) <br/>
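A small sketch (not part of the card above) for fetching one of the listed checkpoints programmatically with `huggingface_hub` instead of the direct links; the filename is simply the first one from the list.

```python
# Sketch only: download a checkpoint file into the local Hugging Face cache.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(repo_id="xdecoder/X-Decoder", filename="xdecoder_focalt_last_novg.pt")
print(ckpt_path)  # local path of the downloaded weights
```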
Realgon/N_bert_twitterfin_padding70model
Realgon
2023-12-27T05:08:59Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-27T04:56:22Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: N_bert_twitterfin_padding70model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # N_bert_twitterfin_padding70model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0123 - Accuracy: 0.8874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6186 | 1.0 | 597 | 0.3664 | 0.8647 | | 0.3355 | 2.0 | 1194 | 0.3325 | 0.8844 | | 0.2398 | 3.0 | 1791 | 0.4079 | 0.8857 | | 0.1511 | 4.0 | 2388 | 0.5350 | 0.8911 | | 0.1077 | 5.0 | 2985 | 0.6086 | 0.8853 | | 0.0367 | 6.0 | 3582 | 0.6945 | 0.8890 | | 0.0368 | 7.0 | 4179 | 0.7918 | 0.8844 | | 0.0283 | 8.0 | 4776 | 0.7927 | 0.8915 | | 0.0236 | 9.0 | 5373 | 0.7818 | 0.8932 | | 0.0204 | 10.0 | 5970 | 0.8325 | 0.8932 | | 0.0168 | 11.0 | 6567 | 0.8979 | 0.8844 | | 0.0101 | 12.0 | 7164 | 0.9055 | 0.8890 | | 0.0088 | 13.0 | 7761 | 0.8781 | 0.8936 | | 0.0054 | 14.0 | 8358 | 0.9046 | 0.8932 | | 0.0062 | 15.0 | 8955 | 0.8997 | 0.8966 | | 0.0037 | 16.0 | 9552 | 0.9535 | 0.8903 | | 0.003 | 17.0 | 10149 | 0.9728 | 0.8915 | | 0.0022 | 18.0 | 10746 | 1.0253 | 0.8869 | | 0.0017 | 19.0 | 11343 | 1.0170 | 0.8890 | | 0.0037 | 20.0 | 11940 | 1.0123 | 0.8874 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
Realgon/N_bert_twitterfin_padding60model
Realgon
2023-12-27T04:56:17Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-27T04:44:13Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: N_bert_twitterfin_padding60model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # N_bert_twitterfin_padding60model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0511 - Accuracy: 0.8911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5957 | 1.0 | 597 | 0.3652 | 0.8643 | | 0.322 | 2.0 | 1194 | 0.3316 | 0.8794 | | 0.2127 | 3.0 | 1791 | 0.4469 | 0.8802 | | 0.1327 | 4.0 | 2388 | 0.5983 | 0.8798 | | 0.1008 | 5.0 | 2985 | 0.6930 | 0.8815 | | 0.0396 | 6.0 | 3582 | 0.7063 | 0.8827 | | 0.0299 | 7.0 | 4179 | 0.8153 | 0.8827 | | 0.0214 | 8.0 | 4776 | 0.8951 | 0.8794 | | 0.023 | 9.0 | 5373 | 0.8829 | 0.8886 | | 0.0221 | 10.0 | 5970 | 0.8879 | 0.8874 | | 0.0129 | 11.0 | 6567 | 0.9308 | 0.8823 | | 0.0079 | 12.0 | 7164 | 0.9553 | 0.8874 | | 0.012 | 13.0 | 7761 | 0.9391 | 0.8907 | | 0.0061 | 14.0 | 8358 | 1.0109 | 0.8894 | | 0.0034 | 15.0 | 8955 | 1.0525 | 0.8811 | | 0.002 | 16.0 | 9552 | 1.0680 | 0.8874 | | 0.0023 | 17.0 | 10149 | 1.0690 | 0.8874 | | 0.0024 | 18.0 | 10746 | 1.0537 | 0.8874 | | 0.0036 | 19.0 | 11343 | 1.0434 | 0.8899 | | 0.0024 | 20.0 | 11940 | 1.0511 | 0.8911 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
mitchyAI/LeeChaeyoungmchy
mitchyAI
2023-12-27T04:45:46Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-27T04:43:01Z
--- license: creativeml-openrail-m ---
Realgon/N_bert_twitterfin_padding50model
Realgon
2023-12-27T04:44:08Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-27T04:32:33Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: N_bert_twitterfin_padding50model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # N_bert_twitterfin_padding50model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0004 - Accuracy: 0.8874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6211 | 1.0 | 597 | 0.3962 | 0.8492 | | 0.3341 | 2.0 | 1194 | 0.3131 | 0.8911 | | 0.2233 | 3.0 | 1791 | 0.4254 | 0.8874 | | 0.1535 | 4.0 | 2388 | 0.6356 | 0.8819 | | 0.1104 | 5.0 | 2985 | 0.6353 | 0.8886 | | 0.0362 | 6.0 | 3582 | 0.7047 | 0.8886 | | 0.0337 | 7.0 | 4179 | 0.7146 | 0.8865 | | 0.02 | 8.0 | 4776 | 0.7171 | 0.8869 | | 0.0271 | 9.0 | 5373 | 0.7534 | 0.8907 | | 0.0173 | 10.0 | 5970 | 0.8021 | 0.8949 | | 0.0148 | 11.0 | 6567 | 0.8200 | 0.8894 | | 0.0073 | 12.0 | 7164 | 0.9640 | 0.8823 | | 0.0082 | 13.0 | 7761 | 0.9143 | 0.8823 | | 0.0093 | 14.0 | 8358 | 0.9854 | 0.8827 | | 0.0058 | 15.0 | 8955 | 0.9301 | 0.8911 | | 0.0036 | 16.0 | 9552 | 0.9559 | 0.8844 | | 0.003 | 17.0 | 10149 | 0.9667 | 0.8915 | | 0.0019 | 18.0 | 10746 | 0.9877 | 0.8915 | | 0.0023 | 19.0 | 11343 | 0.9900 | 0.8878 | | 0.0027 | 20.0 | 11940 | 1.0004 | 0.8874 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
la-min/t5-finetune-health
la-min
2023-12-27T04:34:41Z
6
1
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-27T03:59:10Z
--- license: mit --- --- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer datasets: - [medical_q&a](https://www.kaggle.com/datasets/thedevastator/comprehensive-medical-q-a-dataset) --- # flan-t5-base-finetuned-medical_q&a This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the medical_q&a dataset. ## Model description When using the model input question, please add "Please answer this question:" ### Training hyperparameters The following hyperparameters were used during training: - L_RATE = 3e-4 - BATCH_SIZE = 3 - PER_DEVICE_EVAL_BATCH = 4 - WEIGHT_DECAY = 0.01 - SAVE_TOTAL_LIM = 3 - NUM_EPOCHS = 3 ### Training results | Training Loss | Epoch | Validation Loss | | :-----------: | :---: | :-------------: | | 1.757200 | 1.0 | 1.453026 | | 1.549100 | 2.0 | 1.313304 | | 1.474500 | 3.0 | 1.264468 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.0 - Tokenizers 0.13.3
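A minimal usage sketch (not part of the card above), following the card's instruction to prefix inputs with "Please answer this question:"; the question itself is illustrative.

```python
# Sketch only: query the fine-tuned T5 model with the documented prompt prefix.
from transformers import pipeline

qa = pipeline("text2text-generation", model="la-min/t5-finetune-health")
prompt = "Please answer this question: What are common symptoms of iron deficiency?"
print(qa(prompt, max_new_tokens=128)[0]["generated_text"])
```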
jeiku/LongBoros_3.43B
jeiku
2023-12-27T04:30:14Z
14
0
transformers
[ "transformers", "safetensors", "stablelm_epoch", "text-generation", "custom_code", "en", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-12-27T04:20:09Z
--- license: other language: - en --- 40 Layer 3.43B test model. See merge.yml for more information.
tfyxj/autotrain-bl992-mguwi
tfyxj
2023-12-27T04:27:35Z
5
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain", "dataset:tfyxj/autotrain-data-autotrain-bl992-mguwi", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-27T04:26:41Z
--- tags: - autotrain - image-classification widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace datasets: - tfyxj/autotrain-data-autotrain-bl992-mguwi --- # Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.12179487179487179 f1_micro: 0.2235294117647059 f1_weighted: 0.08167420814479637 precision_macro: 0.07450980392156863 precision_micro: 0.2235294117647059 precision_weighted: 0.04996539792387543 recall_macro: 0.3333333333333333 recall_micro: 0.2235294117647059 recall_weighted: 0.2235294117647059 accuracy: 0.2235294117647059
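A minimal inference sketch (not part of the card above), reusing one of the widget images; note that the reported metrics (loss: nan, accuracy about 0.22) suggest the AutoTrain run may not have converged, so predictions should be treated accordingly.

```python
# Sketch only: classify one of the sample widget images with the AutoTrain model.
from transformers import pipeline

clf = pipeline("image-classification", model="tfyxj/autotrain-bl992-mguwi")
print(clf("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```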
HunyStark/q-FrozenLake-v1-4x4-noSlippery
HunyStark
2023-12-27T04:16:11Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-27T04:16:06Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="HunyStark/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
offtoung/kikoto-kurage-hisohiso-vits
offtoung
2023-12-27T04:10:13Z
5
0
transformers
[ "transformers", "safetensors", "vits", "text-to-audio", "license:other", "endpoints_compatible", "region:us" ]
text-to-audio
2023-12-22T10:12:35Z
--- license: other license_name: kikoto-kurage-voice-corpus license_link: https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service --- A speech synthesis model fine-tuned on Kikoto Kurage's ITA corpus (whisper voice). Pretraining used the ReazonSpeech dataset and the community-built JSUT corpus ("Minna de Tsukuru JSUT Corpus"). For details, see https://zenn.dev/offtoung/articles/034d98bd397527 . You may use the model freely within the scope of the terms of use listed at the URLs below. Note: because the model uses a custom Japanese tokenizer, running it requires the eztts module from the ez-chat-llm package (https://github.com/offtoung/ez-chat-llm). ### Model architecture: VITS (Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech) https://github.com/jaywalnut310/vits ### Training data: ReazonSpeech dataset (https://huggingface.co/datasets/reazon-research/reazonspeech) Minna de Tsukuru JSUT corpus (https://tyc.rei-yumesaki.net/material/minnade-jsut) Kikoto Kurage ITA corpus (https://kikyohiroto1227.wixsite.com/kikoto-utau/kurage) ### Terms of use: Kikoto Kurage ITA Corpus Terms of Use https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service Apart from the prohibited uses listed in the corpus recording terms of use, the model may be used freely. However, when publishing videos or other works that use audio generated by this voice model, you must credit the name of this model (or of the software that includes it, ez-chat-llm) together with the voice model name. In addition, any modification or redistribution of the speech synthesis model must follow the Kikoto Kurage ITA Corpus Terms of Use (https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service).
Realgon/N_bert_twitterfin_padding20model
Realgon
2023-12-27T04:10:11Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-14T10:58:02Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: N_bert_twitterfin_padding20model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # N_bert_twitterfin_padding20model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0103 - Accuracy: 0.8920 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6026 | 1.0 | 597 | 0.3849 | 0.8589 | | 0.3307 | 2.0 | 1194 | 0.3351 | 0.8832 | | 0.2306 | 3.0 | 1791 | 0.4305 | 0.8865 | | 0.1415 | 4.0 | 2388 | 0.5673 | 0.8827 | | 0.1018 | 5.0 | 2985 | 0.6632 | 0.8794 | | 0.0396 | 6.0 | 3582 | 0.7322 | 0.8819 | | 0.0367 | 7.0 | 4179 | 0.7720 | 0.8874 | | 0.0253 | 8.0 | 4776 | 0.8155 | 0.8836 | | 0.0281 | 9.0 | 5373 | 0.8304 | 0.8853 | | 0.0246 | 10.0 | 5970 | 0.8940 | 0.8882 | | 0.0091 | 11.0 | 6567 | 1.0241 | 0.8823 | | 0.0102 | 12.0 | 7164 | 0.9821 | 0.8874 | | 0.0192 | 13.0 | 7761 | 1.0144 | 0.8765 | | 0.0064 | 14.0 | 8358 | 1.0386 | 0.8861 | | 0.0033 | 15.0 | 8955 | 0.9737 | 0.8907 | | 0.0029 | 16.0 | 9552 | 1.0372 | 0.8890 | | 0.002 | 17.0 | 10149 | 1.0022 | 0.8928 | | 0.0016 | 18.0 | 10746 | 1.0081 | 0.8894 | | 0.0017 | 19.0 | 11343 | 1.0171 | 0.8915 | | 0.0024 | 20.0 | 11940 | 1.0103 | 0.8920 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
tanatapanun/fine-tuned-BioBART-2048-inputs-20-epochs
tanatapanun
2023-12-27T04:09:55Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:GanjinZero/biobart-v2-base", "base_model:finetune:GanjinZero/biobart-v2-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-27T02:46:49Z
--- license: apache-2.0 base_model: GanjinZero/biobart-v2-base tags: - generated_from_trainer metrics: - rouge model-index: - name: fine-tuned-BART-2048-inputs-20-epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-BART-2048-inputs-20-epochs This model is a fine-tuned version of [GanjinZero/biobart-v2-base](https://huggingface.co/GanjinZero/biobart-v2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7640 - Rouge1: 0.318 - Rouge2: 0.1243 - Rougel: 0.2884 - Rougelsum: 0.2894 - Gen Len: 15.42 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 151 | 0.7532 | 0.2007 | 0.0751 | 0.1827 | 0.1821 | 13.29 | | No log | 2.0 | 302 | 0.7148 | 0.261 | 0.0836 | 0.2299 | 0.2312 | 13.92 | | No log | 3.0 | 453 | 0.6995 | 0.248 | 0.0862 | 0.2195 | 0.2201 | 14.49 | | 0.724 | 4.0 | 604 | 0.6956 | 0.2944 | 0.1061 | 0.2658 | 0.2665 | 14.31 | | 0.724 | 5.0 | 755 | 0.7029 | 0.3061 | 0.1203 | 0.2808 | 0.283 | 14.81 | | 0.724 | 6.0 | 906 | 0.6965 | 0.2848 | 0.1118 | 0.2596 | 0.2584 | 15.0 | | 0.5016 | 7.0 | 1057 | 0.7097 | 0.2874 | 0.1207 | 0.2558 | 0.2562 | 15.0 | | 0.5016 | 8.0 | 1208 | 0.7140 | 0.293 | 0.1143 | 0.2617 | 0.2641 | 14.3 | | 0.5016 | 9.0 | 1359 | 0.7191 | 0.3198 | 0.1222 | 0.2877 | 0.2903 | 14.75 | | 0.3838 | 10.0 | 1510 | 0.7274 | 0.3127 | 0.1265 | 0.2863 | 0.2874 | 14.82 | | 0.3838 | 11.0 | 1661 | 0.7312 | 0.3129 | 0.1282 | 0.2821 | 0.2819 | 14.97 | | 0.3838 | 12.0 | 1812 | 0.7419 | 0.2974 | 0.1123 | 0.2726 | 0.2725 | 14.98 | | 0.3838 | 13.0 | 1963 | 0.7441 | 0.2945 | 0.1139 | 0.2682 | 0.2681 | 15.1 | | 0.3153 | 14.0 | 2114 | 0.7490 | 0.2969 | 0.1207 | 0.2743 | 0.2753 | 15.29 | | 0.3153 | 15.0 | 2265 | 0.7536 | 0.2971 | 0.1116 | 0.2674 | 0.2689 | 14.83 | | 0.3153 | 16.0 | 2416 | 0.7564 | 0.301 | 0.1078 | 0.271 | 0.2726 | 15.3 | | 0.2646 | 17.0 | 2567 | 0.7585 | 0.2989 | 0.1117 | 0.2737 | 0.2744 | 15.21 | | 0.2646 | 18.0 | 2718 | 0.7630 | 0.2944 | 0.1078 | 0.2641 | 0.265 | 15.12 | | 0.2646 | 19.0 | 2869 | 0.7632 | 0.2986 | 0.1089 | 0.2669 | 0.2683 | 15.25 | | 0.2428 | 20.0 | 3020 | 0.7640 | 0.318 | 0.1243 | 0.2884 | 0.2894 | 15.42 | ### Framework versions - Transformers 4.36.2 - Pytorch 1.12.1+cu113 - Datasets 2.15.0 - Tokenizers 0.15.0
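A minimal inference sketch (not part of the card above); the card does not document the expected input format, so the clinical-note text below is purely an illustrative assumption.

```python
# Sketch only: generate a short output from the fine-tuned BioBART model.
from transformers import pipeline

gen = pipeline("text2text-generation", model="tanatapanun/fine-tuned-BioBART-2048-inputs-20-epochs")
note = "Patient admitted with chest pain, troponin negative, discharged on aspirin and statin."
print(gen(note, max_new_tokens=32)[0]["generated_text"])
```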
lorenzreyes/ppo-LunarLander-v2
lorenzreyes
2023-12-27T03:52:41Z
1
0
transformers
[ "transformers", "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-12-11T02:10:43Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -135.53 +/- 105.78 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
LoneStriker/goliath-120b-2.65bpw-h6-exl2
LoneStriker
2023-12-27T03:49:53Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-27T03:34:02Z
--- license: llama2 language: - en pipeline_tag: conversational --- # Goliath 120B An auto-regressive causal LM created by combining 2x finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) into one. Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix): - [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp) - [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite) - [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM) - [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI) # Prompting Format Both Vicuna and Alpaca will work, but due to the initial and final layers belonging primarily to Xwin, I expect Vicuna to work the best. # Merge process The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B). The layer ranges used are as follows: ```yaml - range 0, 16 Xwin - range 8, 24 Euryale - range 17, 32 Xwin - range 25, 40 Euryale - range 33, 48 Xwin - range 41, 56 Euryale - range 49, 64 Xwin - range 57, 72 Euryale - range 65, 80 Xwin ``` # Screenshots ![image/png](https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/Cat8_Rimaz6Ni7YhQiiGB.png) # Benchmarks Coming soon. # Acknowledgements Credit goes to [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit). Special thanks to [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.
hkivancoral/hushem_40x_beit_large_adamax_00001_fold5
hkivancoral
2023-12-27T03:45:57Z
5
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-large-patch16-224", "base_model:finetune:microsoft/beit-large-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-27T02:26:20Z
--- license: apache-2.0 base_model: microsoft/beit-large-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_40x_beit_large_adamax_00001_fold5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.926829268292683 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_beit_large_adamax_00001_fold5 This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3633 - Accuracy: 0.9268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0116 | 1.0 | 220 | 0.3464 | 0.8780 | | 0.0008 | 2.0 | 440 | 0.2183 | 0.9512 | | 0.0009 | 3.0 | 660 | 0.2250 | 0.9268 | | 0.0006 | 4.0 | 880 | 0.2906 | 0.9268 | | 0.0001 | 5.0 | 1100 | 0.3626 | 0.9268 | | 0.0004 | 6.0 | 1320 | 0.2649 | 0.9512 | | 0.0 | 7.0 | 1540 | 0.4436 | 0.8780 | | 0.0004 | 8.0 | 1760 | 0.4765 | 0.9024 | | 0.0001 | 9.0 | 1980 | 0.4469 | 0.9024 | | 0.0 | 10.0 | 2200 | 0.4327 | 0.8780 | | 0.0 | 11.0 | 2420 | 0.4850 | 0.9268 | | 0.0 | 12.0 | 2640 | 0.4853 | 0.8780 | | 0.0 | 13.0 | 2860 | 0.5574 | 0.8537 | | 0.0 | 14.0 | 3080 | 0.5001 | 0.9024 | | 0.0 | 15.0 | 3300 | 0.4709 | 0.8537 | | 0.0 | 16.0 | 3520 | 0.6659 | 0.8293 | | 0.0 | 17.0 | 3740 | 0.8132 | 0.8293 | | 0.0 | 18.0 | 3960 | 0.7367 | 0.8780 | | 0.0005 | 19.0 | 4180 | 0.2607 | 0.9512 | | 0.0 | 20.0 | 4400 | 0.3217 | 0.9512 | | 0.0 | 21.0 | 4620 | 0.2845 | 0.9512 | | 0.0 | 22.0 | 4840 | 0.5419 | 0.8780 | | 0.0 | 23.0 | 5060 | 0.4106 | 0.9024 | | 0.0 | 24.0 | 5280 | 0.3477 | 0.9024 | | 0.0 | 25.0 | 5500 | 0.4515 | 0.8780 | | 0.0 | 26.0 | 5720 | 0.3857 | 0.9024 | | 0.0 | 27.0 | 5940 | 0.4374 | 0.9024 | | 0.0 | 28.0 | 6160 | 0.5116 | 0.8780 | | 0.0 | 29.0 | 6380 | 0.6248 | 0.8537 | | 0.0 | 30.0 | 6600 | 0.5380 | 0.8780 | | 0.0 | 31.0 | 6820 | 0.5231 | 0.8780 | | 0.0 | 32.0 | 7040 | 0.5186 | 0.8780 | | 0.0 | 33.0 | 7260 | 0.4301 | 0.9024 | | 0.0 | 34.0 | 7480 | 0.4552 | 0.9024 | | 0.0 | 35.0 | 7700 | 0.4309 | 0.9024 | | 0.0 | 36.0 | 7920 | 0.5631 | 0.8780 | | 0.0 | 37.0 | 8140 | 0.5187 | 0.8780 | | 0.0 | 38.0 | 8360 | 0.3960 | 0.9268 | | 0.0 | 39.0 | 8580 | 0.5497 | 0.9024 | | 0.0 | 40.0 | 8800 | 0.4890 | 0.9024 | | 0.0 | 41.0 | 9020 | 0.3987 | 0.9268 | | 0.0 | 42.0 | 9240 | 0.4184 | 0.9268 | | 0.0 | 43.0 | 9460 | 0.3286 | 0.9512 | | 0.0 | 44.0 | 9680 | 0.3483 | 0.9268 | | 0.0 | 45.0 | 9900 | 0.3614 | 0.9268 | | 0.0 | 46.0 | 10120 | 0.3697 | 0.9268 | | 0.0 | 47.0 | 10340 | 0.3577 | 0.9512 | | 0.0 | 48.0 | 10560 | 0.3575 | 0.9512 | | 0.0 | 49.0 | 10780 | 0.3626 | 0.9268 | | 0.0 | 50.0 | 11000 | 0.3633 | 
0.9268 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
calvinyz/dqn-SpaceInvadersNoFrameskip-v4
calvinyz
2023-12-27T03:32:55Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-27T03:32:21Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 621.00 +/- 179.66 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga calvinyz -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga calvinyz -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga calvinyz ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
zebans/bert-base-cased-finetuned-rotten-tomatoes-epochs-5
zebans
2023-12-27T03:24:38Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:rotten_tomatoes_movie_review", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-27T03:18:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - rotten_tomatoes_movie_review metrics: - accuracy - f1 model-index: - name: bert-base-cased-finetuned-rotten-tomatoes-epochs-5 results: - task: name: Text Classification type: text-classification dataset: name: rotten_tomatoes_movie_review type: rotten_tomatoes_movie_review args: default metrics: - name: Accuracy type: accuracy value: 0.975609756097561 - name: F1 type: f1 value: 0.9756096702430234 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-rotten-tomatoes-epochs-5 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the rotten_tomatoes_movie_review dataset. It achieves the following results on the evaluation set: - Loss: 0.1022 - Accuracy: 0.9756 - F1: 0.9756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.595 | 1.0 | 34 | 0.3926 | 0.8780 | 0.8780 | | 0.3767 | 2.0 | 68 | 0.2374 | 0.9390 | 0.9390 | | 0.273 | 3.0 | 102 | 0.1522 | 0.9615 | 0.9615 | | 0.1597 | 4.0 | 136 | 0.1154 | 0.9719 | 0.9719 | | 0.1348 | 5.0 | 170 | 0.1022 | 0.9756 | 0.9756 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.1.0+cu121 - Datasets 1.16.1 - Tokenizers 0.15.0
SimplCup/JackSepticEyeV2
SimplCup
2023-12-27T03:20:35Z
0
0
null
[ "license:cc-by-nc-nd-4.0", "region:us" ]
null
2023-12-27T03:20:04Z
--- license: cc-by-nc-nd-4.0 ---
lorenzreyes/ppo-CartPole-v1
lorenzreyes
2023-12-27T03:20:19Z
0
0
null
[ "tensorboard", "CartPole-v1", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-12-27T03:20:07Z
--- tags: - CartPole-v1 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 240.20 +/- 105.63 name: mean_reward verified: false --- # PPO Agent Playing CartPole-v1 This is a trained model of a PPO agent playing CartPole-v1. # Hyperparameters
prashrex/WizardCoder3b-gguf
prashrex
2023-12-27T03:17:47Z
4
0
transformers
[ "transformers", "gpt_bigcode", "text-generation", "code", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "arxiv:2303.08774", "license:bigcode-openrail-m", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-25T05:17:07Z
--- license: bigcode-openrail-m metrics: - code_eval library_name: transformers tags: - code model-index: - name: WizardCoder-3B-V1.0 results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: 0.348 verified: false --- <h1 style="margin:20px;" align="center">This is a GGUF Version of WizardCoder 3b v1.0</h1> <h2 style="margin:20px;" align="center">Quantization Done by Prashant Vasudevan <a href="https://github.com/vprashrex">Github@vprashrex</a></h2> <h2 style="margin:20px;" align="center">Quantization type Q4_K version</h2> <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News - 🔥🔥🔥[2023/08/26] We released **WizardCoder-Python-34B-V1.0** , which achieves the **73.2 pass@1** and surpasses **GPT4 (2023/03/15)**, **ChatGPT-3.5**, and **Claude2** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). - [2023/06/16] We released **WizardCoder-15B-V1.0** , which achieves the **57.3 pass@1** and surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). ❗Note: There are two HumanEval results of GPT4 and ChatGPT-3.5. The 67.0 and 48.1 are reported by the official GPT4 Report (2023/03/15) of [OpenAI](https://arxiv.org/abs/2303.08774). The 82.0 and 72.5 are tested by ourselves with the latest API (2023/08/26). 
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License | | ----- |------| ---- |------|-------| ----- | ----- | | WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | - Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**. - Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM, and achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM. 
<font size=4> | Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License| | ----- |------| ---- |------|-------| ----- | ----- | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo ](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>| </font> - [08/09/2023] We released **WizardLM-70B-V1.0** model. Here is [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-70B-V1.0). <font size=4> | <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>| | ----- |------| ---- |------|-------| ----- | ----- | ----- | | <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 </sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 </sup>| <sup>Non-commercial</sup>| | <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 </sup>| <sup>Non-commercial</sup> | | <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 </sup> | <sup>Non-commercial</sup>| | <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 </sup>|<sup> Non-commercial</sup>| </font> ## Comparing WizardCoder-Python-34B-V1.0 with Other LLMs. 🔥 The following figure shows that our **WizardCoder-Python-34B-V1.0 attains the second position in this benchmark**, surpassing GPT4 (2023/03/15, 73.2 vs. 67.0), ChatGPT-3.5 (73.2 vs. 72.5) and Claude2 (73.2 vs. 
71.2). <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/compare_sota.png" alt="WizardCoder" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Prompt Format ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" ``` ## Inference Demo Script We provide the inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo). Note: This script supports `WizardLM/WizardCoder-Python-34B/13B/7B-V1.0`. If you want to inference with `WizardLM/WizardCoder-15B/3B/1B-V1.0`, please change the `stop_tokens = ['</s>']` to `stop_tokens = ['<|endoftext|>']` in the script. ## Citation Please cite the repo if you use the data, method or code in this repo. ``` @misc{luo2023wizardcoder, title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct}, author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang}, year={2023}, } ```
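A minimal usage sketch (not part of the card above) for running the Q4_K GGUF file with `llama-cpp-python`, using the prompt format documented in the card; the local `.gguf` filename is an assumption, so check the repository's file list.

```python
# Sketch only: load the quantized GGUF and complete an instruction-style prompt.
from llama_cpp import Llama

llm = Llama(model_path="wizardcoder-3b-v1.0.Q4_K.gguf", n_ctx=2048)  # filename is assumed
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n### Response:"
)
out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```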
prashrex/Santacoder-gguf
prashrex
2023-12-27T03:15:10Z
7
0
transformers
[ "transformers", "gpt_bigcode", "text-generation", "code", "dataset:bigcode/the-stack", "arxiv:1911.02150", "arxiv:2207.14255", "arxiv:2301.03988", "license:bigcode-openrail-m", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-25T07:59:04Z
--- license: bigcode-openrail-m datasets: - bigcode/the-stack language: - code programming_language: - Java - JavaScript - Python pipeline_tag: text-generation inference: true widget: - text: 'def print_hello_world():' example_title: Hello world group: Python model-index: - name: SantaCoder results: - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL HumanEval (Python) metrics: - name: pass@1 type: pass@1 value: 0.18 verified: false - name: pass@10 type: pass@10 value: 0.29 verified: false - name: pass@100 type: pass@100 value: 0.49 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL MBPP (Python) metrics: - name: pass@1 type: pass@1 value: 0.35 verified: false - name: pass@10 type: pass@10 value: 0.58 verified: false - name: pass@100 type: pass@100 value: 0.77 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL HumanEval (JavaScript) metrics: - name: pass@1 type: pass@1 value: 0.16 verified: false - name: pass@10 type: pass@10 value: 0.27 verified: false - name: pass@100 type: pass@100 value: 0.47 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL MBPP (Javascript) metrics: - name: pass@1 type: pass@1 value: 0.28 verified: false - name: pass@10 type: pass@10 value: 0.51 verified: false - name: pass@100 type: pass@100 value: 0.7 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL HumanEval (Java) metrics: - name: pass@1 type: pass@1 value: 0.15 verified: false - name: pass@10 type: pass@10 value: 0.26 verified: false - name: pass@100 type: pass@100 value: 0.41 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL MBPP (Java) metrics: - name: pass@1 type: pass@1 value: 0.28 verified: false - name: pass@10 type: pass@10 value: 0.44 verified: false - name: pass@100 type: pass@100 value: 0.59 verified: false - task: type: text-generation dataset: type: loubnabnl/humaneval_infilling name: HumanEval FIM (Python) metrics: - name: single_line type: exact_match value: 0.44 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL HumanEval FIM (Java) metrics: - name: single_line type: exact_match value: 0.62 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL HumanEval FIM (JavaScript) metrics: - name: single_line type: exact_match value: 0.6 verified: false - task: type: text-generation dataset: type: code_x_glue_ct_code_to_text name: CodeXGLUE code-to-text (Python) metrics: - name: BLEU type: bleu value: 18.13 verified: false --- <h1 style="margin:20px;" align="center">This is a GGUF Version of SantaCoder</h1> <h2 style="margin:20px;" align="center">Quantization Done by Prashant Vasudevan <a href="https://github.com/vprashrex">Github@vprashrex</a></h2> <h2 style="margin:20px;" align="center">Quantization type Q4_K version</h2> # SantaCoder ![banner](https://huggingface.co/datasets/bigcode/admin/resolve/main/banner.png) Play with the model on the [SantaCoder Space Demo](https://huggingface.co/spaces/bigcode/santacoder-demo). # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 3. [Limitations](#limitations) 4. [Training](#training) 5. [License](#license) 6. 
[Citation](#citation) # Model Summary The SantaCoder models are a series of 1.1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests). The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), a context window of 2048 tokens, and was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255). In addition there are several models that were trained on datasets with different filter parameters and with architecture and objective variations. - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM) - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org) - **Paper:** [🎅SantaCoder: Don't reach for the stars!🌟](https://arxiv.org/abs/2301.03988) - **Point of Contact:** [[email protected]](mailto:[email protected]) - **Languages:** Python, Java, and JavaScript |Model|Architecture|Objective|Filtering| |:-|:-|:-|:-| |`mha`|MHA|AR + FIM| Base | |`no-fim`| MQA | AR| Base | |`fim`| MQA | AR + FIM | Base | |`stars`| MQA | AR + FIM | GitHub stars | |`fertility`| MQA | AR + FIM | Tokenizer fertility | |`comments`| MQA | AR + FIM | Comment-to-code ratio | |`dedup-alt`| MQA | AR + FIM | Stronger near-deduplication | |`final`| MQA | AR + FIM | Stronger near-deduplication and comment-to-code ratio | The `final` model is the best performing model and was trained twice as long (236B tokens) as the others. This checkpoint is the default model and available on the `main` branch. All other checkpoints are on separate branches with according names. # Use ## Intended use The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. You should phrase commands like they occur in source code such as comments (e.g. `# the following function computes the sqrt`) or write a function signature and docstring and let the model complete the function body. **Feel free to share your generations in the Community tab!** ## How to use ### Generation ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigcode/santacoder" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True).to(device) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ### Fill-in-the-middle Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output: ```python input_text = "<fim-prefix>def print_hello_world():\n <fim-suffix>\n print('Hello world!')<fim-middle>" inputs = tokenizer.encode(input_text, return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` Make sure to use `<fim-prefix>, <fim-suffix>, <fim-middle>` and not `<fim_prefix>, <fim_suffix>, <fim_middle>` as in StarCoder models. ### Load other checkpoints We upload the checkpoint of each experiment to a separate branch as well as the intermediate checkpoints as commits on the branches. 
You can load them with the `revision` flag: ```python model = AutoModelForCausalLM.from_pretrained( "bigcode/santacoder", revision="no-fim", # name of branch or commit hash trust_remote_code=True ) ``` ### Attribution & Other Requirements The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/santacoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code. # Limitations The model has been trained on source code in Python, Java, and JavaScript. The predominant natural language in the source code is English, although other languages are also present. As such, the model is capable of generating code snippets provided some context, but the generated code is not guaranteed to work as intended. It can be inefficient and may contain bugs or exploits. # Training ## Model - **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective - **Pretraining steps:** 600K - **Pretraining tokens:** 236 billion - **Precision:** float16 ## Hardware - **GPUs:** 96 Tesla V100 - **Training time:** 6.2 days - **Total FLOPS:** 2.1 x 10e21 ## Software - **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) - **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex) # License The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement). # Citation ``` @article{allal2023santacoder, title={SantaCoder: don't reach for the stars!}, author={Allal, Loubna Ben and Li, Raymond and Kocetkov, Denis and Mou, Chenghao and Akiki, Christopher and Ferrandis, Carlos Munoz and Muennighoff, Niklas and Mishra, Mayank and Gu, Alex and Dey, Manan and others}, journal={arXiv preprint arXiv:2301.03988}, year={2023} } ```
hwhjones/distilhubertmk22
hwhjones
2023-12-27T03:13:22Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-12-27T00:30:02Z
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.86 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.8236 - Accuracy: 0.86 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4322 | 1.0 | 225 | 1.2094 | 0.7 | | 0.8987 | 2.0 | 450 | 1.0620 | 0.63 | | 0.1897 | 3.0 | 675 | 0.6543 | 0.79 | | 0.7499 | 4.0 | 900 | 0.5746 | 0.84 | | 0.0585 | 5.0 | 1125 | 0.6851 | 0.82 | | 0.0127 | 6.0 | 1350 | 0.7394 | 0.82 | | 0.0119 | 7.0 | 1575 | 1.0074 | 0.81 | | 0.0037 | 8.0 | 1800 | 0.8042 | 0.85 | | 0.0027 | 9.0 | 2025 | 0.8673 | 0.84 | | 0.0018 | 10.0 | 2250 | 0.9179 | 0.85 | | 0.0016 | 11.0 | 2475 | 0.8380 | 0.86 | | 0.0016 | 12.0 | 2700 | 0.8236 | 0.86 | ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.4.0 - Tokenizers 0.15.0
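For quick inference, a minimal sketch (assuming the checkpoint id from this page and a local audio file as input):

```python
# Minimal sketch: classify the genre of a local audio clip with this fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("audio-classification", model="hwhjones/distilhubertmk22")
predictions = classifier("example_clip.wav")  # hypothetical local file path
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```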
Mojarra/calvo
Mojarra
2023-12-27T02:53:24Z
0
0
null
[ "es", "license:apache-2.0", "region:us" ]
null
2023-12-27T02:48:19Z
--- license: apache-2.0 language: - es ---
chanhua/autotrain-izefx-v3qh0
chanhua
2023-12-27T02:52:21Z
6
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain", "dataset:chanhua/autotrain-data-autotrain-izefx-v3qh0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-27T02:51:53Z
--- tags: - autotrain - image-classification widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace datasets: - chanhua/autotrain-data-autotrain-izefx-v3qh0 --- # Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.9459153413772583 f1_macro: 0.26666666666666666 f1_micro: 0.5 f1_weighted: 0.4 precision_macro: 0.2222222222222222 precision_micro: 0.5 precision_weighted: 0.3333333333333333 recall_macro: 0.3333333333333333 recall_micro: 0.5 recall_weighted: 0.5 accuracy: 0.5
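For inference, a minimal sketch with a standard image-classification pipeline (assuming the checkpoint id from this page; the example image is one of the widget samples above):

```python
# Minimal sketch: run the AutoTrain image classifier on one of the widget sample images.
from transformers import pipeline

classifier = pipeline("image-classification", model="chanhua/autotrain-izefx-v3qh0")
predictions = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(predictions)  # labels come from the model's own training classes
```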
beomi/open-llama-2-ko-7b
beomi
2023-12-27T02:44:39Z
134
39
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "kollama", "llama-2-ko", "ko", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-14T13:19:21Z
--- language: - ko - en pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 - kollama - llama-2-ko license: mit library_name: transformers --- **Update Log** - 2023.12.14: Initial Release of Open-Llama-2-Ko # **Open-Llama-2-Ko** 🦙🇰🇷 Open-Llama-2-Ko represents an advanced iteration of the Llama 2 model, featuring an expanded vocabulary and the inclusion of a Korean corpus for enhanced pretraining. Similar to its predecessor, Llama-2-Ko, this model operates within the range of generative text models, with parameter counts ranging from 7 billion to 70 billion. The focus of this repository is on the 7B pretrained version, designed to integrate seamlessly with the Hugging Face Transformers format. The primary distinction between the Llama-2-Ko Series and Open-Llama-2-Ko lies in the dataset. Open-Llama-2-Ko exclusively utilizes publicly accessible Korean corpora, including sources such as [AI Hub](https://www.aihub.or.kr), [Modu Corpus, 모두의 말뭉치](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/). As training was conducted solely with publicly available corpora, this model is open for unrestricted use by everyone, adhering to the MIT License*. *MIT License under LLAMA 2 COMMUNITY LICENSE AGREEMENT ## Model Details **Model Developers:** Junbum Lee (Beomi) **Variations:** Open-Llama-2-Ko will be available in different parameter sizes — 7B and 13B — along with various pretrained options. **Input:** The model accepts only text input. **Output:** The model produces text output exclusively. **Model Architecture:** Open-Llama-2-Ko is an auto-regressive language model that leverages an optimized transformer architecture derived from Llama-2. | |Training Data|Parameters|Content Length|GQA|Tokens|Learning Rate| |---|---|---|---|---|---|---| |Llama 2|*A curated mix of Publicly Accessible Korean Corpora*|7B|2k|✘|>15B*|5e<sup>-5</sup>| **Training Corpus** The model was trained using selected datasets from AIHub and Modu Corpus. Detailed information about the training datasets is available below: - AI Hub: [corpus/AI_HUB](./corpus/AI_HUB) - Only the `Training` segment of the data was used. - The `Validation` and `Test` segments were deliberately excluded. - Modu Corpus: [corpus/MODU_CORPUS](./corpus/MODU_CORPUS) The final JSONL dataset used to train this model is approximately 61GB in size. Total token count: Approximately 15 billion tokens (*using the expanded tokenizer. With the original Llama tokenizer, >60 billion tokens.) **Vocab Expansion** | Model Name | Vocabulary Size | Description | | --- | --- | --- | | Original Llama-2 | 32000 | Sentencepiece BPE | | **Expanded Llama-2-Ko** | 46336 | Sentencepiece BPE. 
Added Korean vocab and merges | **Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."** | Model | Tokens | | --- | --- | | Llama-2 | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요']` | | Llama-2-Ko | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요']` | **Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"** | Model | Tokens | | --- | --- | | Llama-2 | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` | | Llama-2-Ko | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` | # LICENSE [MIT License under LLAMA 2 COMMUNITY LICENSE AGREEMENT](./LICENSE) # **Model Benchmark** ## LM Eval Harness - Korean (polyglot branch) - Used EleutherAI's lm-evaluation-harness https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot TBD ## Citation TBD ## Acknowledgements - Training support was provided by the [TPU Research Cloud](https://sites.research.google/trc/) program. - The training corpus includes data from [AI Hub](https://www.aihub.or.kr/), [Modu Corpus](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/).
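A minimal generation sketch with Transformers (assuming the checkpoint id from this page; the prompt and generation settings are illustrative only):

```python
# Minimal sketch: load the 7B pretrained checkpoint and generate a continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/open-llama-2-ko-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("대한민국의 수도는", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```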
Pongsaky/poca-SoccerTwos
Pongsaky
2023-12-27T02:42:26Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-12-27T02:40:15Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Pongsaky/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
rabil/dolphin-2.2.1-AshhLimaRP-Mistral-7B-llamafile
rabil
2023-12-27T02:40:18Z
13
0
null
[ "llamafile", "region:us" ]
null
2023-12-26T16:16:24Z
## dolphin-2.2.1-AshhLimaRP-Mistral-7B-llamafile llamafile lets you distribute and run LLMs with a single file. [announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/) #### Downloads - [dolphin-2.2.1-ashhlimarp-mistral-7b.Q4_K_M-server.llamafile](https://huggingface.co/rabil/dolphin-2.2.1-AshhLimaRP-Mistral-7B-llamafile/resolve/main/dolphin-2.2.1-ashhlimarp-mistral-7b.Q4_K_M-server.llamafile) - [dolphin-2.2.1-ashhlimarp-mistral-7b.Q5_K_M-server.llamafile](https://huggingface.co/rabil/dolphin-2.2.1-AshhLimaRP-Mistral-7B-llamafile/resolve/main/dolphin-2.2.1-ashhlimarp-mistral-7b.Q5_K_M-server.llamafile) - [dolphin-2.2.1-ashhlimarp-mistral-7b.Q8_0-server.llamafile](https://huggingface.co/rabil/dolphin-2.2.1-AshhLimaRP-Mistral-7B-llamafile/resolve/main/dolphin-2.2.1-ashhlimarp-mistral-7b.Q8_0-server.llamafile) This repository was created using the [llamafile-builder](https://github.com/rabilrbl/llamafile-builder)
GR4V311/1
GR4V311
2023-12-27T02:39:38Z
0
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-03T00:09:14Z
--- language: - en license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Anything V5 (https://civitai.com/models/9409) # Uploaded by the Real Anything V3 Author # Please try it
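A minimal text-to-image sketch with Diffusers (assuming the checkpoint id from this page; prompt, dtype, and device are illustrative):

```python
# Minimal sketch: generate one image with the Stable Diffusion checkpoint in this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("GR4V311/1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("1girl, masterpiece, best quality, detailed anime illustration").images[0]
image.save("anything_v5_sample.png")
```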
wac81/toy_retnet_1.3b
wac81
2023-12-27T02:37:55Z
2
0
transformers
[ "transformers", "pytorch", "retnet", "fill-mask", "arxiv:2307.08621", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-12-26T13:11:14Z
## 介绍 (Introduction) retnet-1.3B-toy 是一个开源模型。主要是为探索模型小型化,测试小数据量训练的最佳效果。 1. 根据retnet论文([https://arxiv.org/pdf/2307.08621.pdf](https://arxiv.org/pdf/2307.08621.pdf))开发并基于transformer文本生成模型。该仓库的算法实现根据repo进行([https://github.com/syncdoth/RetNet.git](https://github.com/syncdoth/RetNet.git)) 2. 该仓库目标是建立一个retnet基础训练仓库,建议做学习研究使用,不建议商用。 3. 该仓库只使用wiki文本和少量sharegpt/belle/多轮指令数据集训练而成。包含中英文数据,数据估算占比7:3。 4. 本次放出pretrain模型与sft微调后模型。 5. 本模型使用了tokenizer为百川大模型的第一版分词器,共包含64000个vocab。 6. 已知问题: - 会出现重复句子回答,可以调节topk减轻该问题。 - 会出现回答不全问题,可以提高max_new_token缓解该问题。 - 由于知识储备不足,回答准确性一般。 retnet-1.3B-toy is an open source model. 1. Developed according to retnet paper ([https://arxiv.org/pdf/2307.08621.pdf](https://arxiv.org/pdf/2307.08621.pdf)) and based on transformer text generation model. The algorithmic implementation of this repository is carried out according to repo ([https://github.com/syncdoth/RetNet.git](https://github.com/syncdoth/RetNet.git)) 2. The goal of this repository is to suggest a retnet base training repository, which is recommended to be used for learning research and not for commercial use. 3. This repository is trained using only wiki text and a small amount of sharegpt/belle instruction dataset. 4. This release pretrain model with sft fine-tuned model. 5. This model uses the tokenizer as the first version of the BaiChuan model tokenizer, which contains a total of 64,000 vocabs. 6. known issues: - Repeated sentence answers will occur, topk can be adjusted to mitigate the problem. - Incomplete answers will occur, you can increase max_new_token to alleviate the problem. - Answer accuracy is average due to insufficient knowledge base. ## 软件依赖 (Dependencies) ```shell pip install torch transformers ``` ## 模型&代码仓库(Model&Code Repo) 1. 基础预训练模型(pretrain model) ([https://huggingface.co/wac81/toy_retnet_1.3b_pretrain](https://huggingface.co/wac81/toy_retnet_1.3b_pretrain)) 2. sft微调后模型(sft model) ([https://huggingface.co/wac81/toy_retnet_1.3b](https://huggingface.co/wac81/toy_retnet_1.3b)) 3. Code Repo ([https://github.com/wac81/toy_retnet_1.3b](https://github.com/wac81/toy_retnet_1.3b)) ## 最小需求 (Minimum Requirements) 模型可以完全加载在8GB显卡上,8bit/4bit量化后,理论上可以加载在4GB显卡上 The model can be fully loaded on an 8GB graphics card, and after 8bit or 4bit quantization, it can theoretically be loaded on a 4GB graphics card ## 代码调用 (Code Usage) sft模型下载后放入checkpoints/checkpoint-21000目录,可以通过如下代码调用 retnet-1.3B-toy 模型来生成对话: After the sft model is downloaded and put into the checkpoints/checkpoint-21000 directory, you can call the retnet-1.3B-toy model to generate a dialog with the following code: python generate.py ```shell user:中医如何医治风寒 system:中医的治疗方法主要包括针灸、针灸、推拿、太极拳等。针灸可以帮助人体解毒、调节身体温度,针灸可以刺激人体的血液循环,推拿可以促进血液循环,推拿可以促进血液循环,从而缓解身体不适。针灸可以帮助人体解毒、调节身体温度,推拿可以促进血液循环,从而缓解身体不适。太极拳则可以帮助人体解毒、调节身体温度,推拿可以促进血液循环,从而缓解身体不适。太极拳则可以帮助人体解毒、调节身体温度,推拿可以促进血液循环, ``` ## 协议 (License) 本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源,retnet-1.3B-toy 模型的权重的使用则需要遵循 [Model License](MODEL_LICENSE)。 The code in this repository is open-sourced under the [Apache-2.0 license](LICENSE), while the use of the retnet-1.3B-toy model weights needs to comply with the [Model License](MODEL_LICENSE).
jan-hq/stealth-v1.1
jan-hq
2023-12-27T02:31:19Z
13
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-21T14:14:52Z
--- license: apache-2.0 language: - en --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <p align="center"> <a href="https://jan.ai/">Jan</a > - <a href="https://discord.gg/AsJ8krTT3N">Discord</a> </p> <!-- header end --> # Prompt template ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` # Run this model You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux. Jan is an open source, ChatGPT alternative that is: - 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you. - 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time. - 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints - 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png) # About Jan Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones. Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
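For reference, the ChatML template above can be applied with plain Transformers roughly as follows (a sketch assuming the checkpoint id from this page; the system/user messages and generation settings are placeholders):

```python
# Minimal sketch: build a ChatML prompt by hand and generate with the checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jan-hq/stealth-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite one sentence about llamas.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```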
chanhua/autotrain-rnjto-gg00g
chanhua
2023-12-27T02:09:00Z
5
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain", "dataset:chanhua/autotrain-data-autotrain-rnjto-gg00g", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-27T02:08:31Z
--- tags: - autotrain - image-classification widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace datasets: - chanhua/autotrain-data-autotrain-rnjto-gg00g --- # Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 1.0826029777526855 f1_macro: 0.5555555555555555 f1_micro: 0.6666666666666666 f1_weighted: 0.5555555555555555 precision_macro: 0.5 precision_micro: 0.6666666666666666 precision_weighted: 0.5 recall_macro: 0.6666666666666666 recall_micro: 0.6666666666666666 recall_weighted: 0.6666666666666666 accuracy: 0.6666666666666666
chanhua/autotrain-krvpy-mebgz
chanhua
2023-12-27T02:04:39Z
6
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain", "dataset:chanhua/autotrain-data-autotrain-krvpy-mebgz", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-27T02:04:04Z
--- tags: - autotrain - image-classification widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace datasets: - chanhua/autotrain-data-autotrain-krvpy-mebgz --- # Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 1.0846457481384277 f1_macro: 0.26666666666666666 f1_micro: 0.5 f1_weighted: 0.4 precision_macro: 0.2222222222222222 precision_micro: 0.5 precision_weighted: 0.3333333333333333 recall_macro: 0.3333333333333333 recall_micro: 0.5 recall_weighted: 0.5 accuracy: 0.5
quantux/ppo-LunarLander
quantux
2023-12-27T02:00:20Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-27T01:58:32Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 247.19 +/- 16.09 name: mean_reward verified: false --- # **ppo** Agent playing **LunarLander-v2** This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
JanLilan/speecht5_finetuned_openslr-slr69-cat
JanLilan
2023-12-27T01:57:13Z
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "ca", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2023-12-26T12:12:39Z
--- license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer - text-to-speech model-index: - name: speecht5_finetuned_openslr-slr69-cat results: [] language: - ca task: text-to-speech --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_openslr-slr69-cat This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on a [projecte-aina/openslr-slr69-ca-trimmed-denoised](https://huggingface.co/datasets/projecte-aina/openslr-slr69-ca-trimmed-denoised) dataset. It achieves the following results on the evaluation set: - eval_loss: 0.4427 - eval_runtime: 14.1078 - eval_samples_per_second: 30.054 - eval_steps_per_second: 15.027 - epoch: 16.77 - step: 2000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
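A minimal TTS sketch (assuming the standard SpeechT5 pieces — processor, model, and the `microsoft/speecht5_hifigan` vocoder — plus a generic x-vector speaker embedding from `Matthijs/cmu-arctic-xvectors`, which may not match the fine-tuning voice):

```python
# Minimal sketch: synthesize a short Catalan sentence with this fine-tuned SpeechT5 checkpoint.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "JanLilan/speecht5_finetuned_openslr-slr69-cat"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Generic speaker embedding; a speaker closer to the training data should sound better.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Bon dia, com estàs?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech_ca.wav", speech.numpy(), samplerate=16000)
```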
HimashaJ96/Me
HimashaJ96
2023-12-27T01:44:05Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:TheBloke/zephyr-7B-beta-GPTQ", "base_model:adapter:TheBloke/zephyr-7B-beta-GPTQ", "region:us" ]
null
2023-12-27T01:43:47Z
--- library_name: peft base_model: TheBloke/zephyr-7B-beta-GPTQ --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
elyza/ELYZA-japanese-Llama-2-13b-fast-instruct
elyza
2023-12-27T01:41:51Z
1,458
22
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ja", "en", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-25T18:14:10Z
--- license: llama2 language: - ja - en --- ## ELYZA-japanese-Llama-2-13b-fast-instruct ![ELYZA-Japanese-Llama2-image](./key_visual.png) ### Model Description **ELYZA-japanese-Llama-2-13b** は、 Llama 2をベースとして日本語能力を拡張するために追加事前学習を行ったモデルです。 詳細は [Blog記事](https://note.com/elyza/n/n5d42686b60b7) を参照してください。 ### Usage ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer B_INST, E_INST = "[INST]", "[/INST]" B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n" DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。" text = "仕事の熱意を取り戻すためのアイデアを5つ挙げてください。" model_name = "elyza/ELYZA-japanese-Llama-2-13b-fast-instruct" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, use_cache=True, device_map="auto", low_cpu_mem_usage=True, ) model.eval() prompt = "{bos_token}{b_inst} {system}{prompt} {e_inst} ".format( bos_token=tokenizer.bos_token, b_inst=B_INST, system=f"{B_SYS}{DEFAULT_SYSTEM_PROMPT}{E_SYS}", prompt=text, e_inst=E_INST, ) token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_new_tokens=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, ) output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1) :], skip_special_tokens=True) print(output) ``` ### ELYZA-japanese-Llama-2-13b Models | Model Name | Vocab Size | #Params | |:---------------------------------------------|:----------:|:-------:| |[elyza/ELYZA-japanese-Llama-2-13b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b)| 32000 | 13.02B | |[elyza/ELYZA-japanese-Llama-2-13b-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-instruct)| 32000 | 13.02B | |[elyza/ELYZA-japanese-Llama-2-13b-fast](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-fast)| 44581 | 13.14B | |[elyza/ELYZA-japanese-Llama-2-13b-fast-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-fast-instruct)| 44581 | 13.14B | ### Developers - [Akira Sasaki](https://huggingface.co/akirasasaki) - [Masato Hirakawa](https://huggingface.co/m-hirakawa) - [Shintaro Horie](https://huggingface.co/e-mon) - [Tomoaki Nakamura](https://huggingface.co/tyoyo) - [Sam Passaglia](https://huggingface.co/passaglia) - [Daisuke Oba](https://huggingface.co/daisuk30ba) (intern) ### Licence Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. 
### How to Cite ```tex @misc{elyzallama2023, title={ELYZA-japanese-Llama-2-13b}, url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b}, author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura and Sam Passaglia and Daisuke Oba}, year={2023}, } ``` ### Citations ```tex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Sakshi1307/test3
Sakshi1307
2023-12-27T01:41:47Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2023-12-27T01:41:36Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
elyza/ELYZA-japanese-Llama-2-13b-fast
elyza
2023-12-27T01:41:31Z
1,413
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ja", "en", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-25T17:14:44Z
--- license: llama2 language: - ja - en --- ## ELYZA-japanese-Llama-2-13b-fast ![ELYZA-Japanese-Llama2-image](./key_visual.png) ### Model Description **ELYZA-japanese-Llama-2-13b** は、 Llama 2をベースとして日本語能力を拡張するために追加事前学習を行ったモデルです。 詳細は [Blog記事](https://note.com/elyza/n/n5d42686b60b7) を参照してください。 ### Usage ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "elyza/ELYZA-japanese-Llama-2-13b-fast" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.float16, use_cache=True, device_map="auto", low_cpu_mem_usage=True, ) model.eval() text = "自然言語処理とは、" token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_new_tokens=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, ) output = tokenizer.decode(output_ids.tolist()[0], skip_special_tokens=True) print(output) ``` ### ELYZA-japanese-Llama-2-13b Models | Model Name | Vocab Size | #Params | |:---------------------------------------------|:----------:|:-------:| |[elyza/ELYZA-japanese-Llama-2-13b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b)| 32000 | 13.02B | |[elyza/ELYZA-japanese-Llama-2-13b-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-instruct)| 32000 | 13.02B | |[elyza/ELYZA-japanese-Llama-2-13b-fast](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-fast)| 44581 | 13.14B | |[elyza/ELYZA-japanese-Llama-2-13b-fast-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-fast-instruct)| 44581 | 13.14B | ### Developers - [Akira Sasaki](https://huggingface.co/akirasasaki) - [Masato Hirakawa](https://huggingface.co/m-hirakawa) - [Shintaro Horie](https://huggingface.co/e-mon) - [Tomoaki Nakamura](https://huggingface.co/tyoyo) - [Sam Passaglia](https://huggingface.co/passaglia) - [Daisuke Oba](https://huggingface.co/daisuk30ba) (intern) ### Licence Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. 
### How to Cite ```tex @misc{elyzallama2023, title={ELYZA-japanese-Llama-2-13b}, url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b}, author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura and Sam Passaglia and Daisuke Oba}, year={2023}, } ``` ### Citations ```tex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Intel/table-transformer-int8-static-inc
Intel
2023-12-27T01:37:27Z
0
3
null
[ "onnx", "table-transformer", "table detection", "table structure recognition", "int8", "Intel® Neural Compressor", "neural-compressor", "PostTrainingStatic", "dataset:bsmock/pubtables-1m", "license:mit", "region:us" ]
null
2023-12-27T01:15:52Z
--- license: mit tags: - table-transformer - table detection - table structure recognition - int8 - Intel® Neural Compressor - neural-compressor - PostTrainingStatic - onnx datasets: - bsmock/pubtables-1m --- # INT8 Table Transformer ## Post-training static quantization ### ONNX This repo contains the models for 1) table detection and 2) table structure recognition. The original FP32 PyTorch model comes from [bsmock/tatr-pubtables1m-v1.0](https://huggingface.co/bsmock/tatr-pubtables1m-v1.0). The INT8 ONNX models are quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor). Refer to this [link](https://github.com/intel/neural-compressor/tree/master/examples/onnxrt/object_detection/table_transformer/quantization/ptq_static) for model preparation, quantization, and benchmark scripts. #### Test result Table detection: | |INT8|FP32| |---|:---:|:---:| | **COCO metrics (AP)** |0.9691|0.9706| | **Model size (MB)** |56|111| Table structure recognition: | |INT8|FP32| |---|:---:|:---:| | **Model size (MB)** |56|111|
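A minimal loading sketch with ONNX Runtime (assuming the repo id from this page; the ONNX file name below is a placeholder — check the repository's file listing for the actual names):

```python
# Minimal sketch: download one of the INT8 ONNX models and open an inference session.
import onnxruntime as ort
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Intel/table-transformer-int8-static-inc",
    filename="detection_int8.onnx",  # hypothetical file name
)
session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
print([inp.name for inp in session.get_inputs()])  # inspect expected input tensors
```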
Entreprenerdly/cat-toy
Entreprenerdly
2023-12-27T01:33:15Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-27T01:32:14Z
--- license: creativeml-openrail-m tags: - text-to-image --- ### Cat toy on Stable Diffusion via Dreambooth #### Model by CrisVelasquez This is the Stable Diffusion model fine-tuned on the Cat toy concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **<cat-toy> toy** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/CrisVelasquez/cat-toy/resolve/main/concept_images/0.jpeg) ![image 1](https://huggingface.co/CrisVelasquez/cat-toy/resolve/main/concept_images/1.jpeg) ![image 2](https://huggingface.co/CrisVelasquez/cat-toy/resolve/main/concept_images/2.jpeg) ![image 3](https://huggingface.co/CrisVelasquez/cat-toy/resolve/main/concept_images/3.jpeg)
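A minimal generation sketch with Diffusers using the instance prompt above (assuming the checkpoint id from this page; prompt wording, dtype, and device are illustrative):

```python
# Minimal sketch: generate an image of the learned <cat-toy> concept.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Entreprenerdly/cat-toy", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of <cat-toy> toy floating in space").images[0]
image.save("cat_toy_space.png")
```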
jeiku/Rosa_v1_3.43B_GGUF
jeiku
2023-12-27T01:18:39Z
16
0
null
[ "gguf", "en", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2023-12-26T22:12:28Z
--- license: other language: - en --- Check merge.yml for more information on the creation of this model. This model, much like Damascus steel, includes layers of high-quality merges, extended out to 40 overall layers spread over 3 merged models, which include at least 4 models each. This model includes some essay-writing components, some medical components, a small amount of RAG-processing components, and many roleplaying and conversational components. I have tested this model, and it has proven interesting enough to be the daily driver for my mobile device. FP16 available here: https://huggingface.co/jeiku/Rosa_v1_3.34B
jeiku/Rosa_v1_3.43B
jeiku
2023-12-27T01:16:55Z
14
0
transformers
[ "transformers", "safetensors", "stablelm_epoch", "text-generation", "conversational", "custom_code", "en", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-12-26T21:43:10Z
--- license: other language: - en --- Check merge.yml for more information on the creation of this model. This model, much like Damascus steel, includes layers of high-quality merges, extended out to 40 overall layers spread over 3 merged models, which include at least 4 models each. This model includes some essay-writing components, some medical components, a small amount of RAG-processing components, and many roleplaying and conversational components. I have tested this model, and it has proven interesting enough to be the daily driver for my mobile device. GGUF available here: https://huggingface.co/jeiku/Rosa_v1_3.34B_GGUF
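A minimal generation sketch (assuming the checkpoint id from this page; since the architecture is a StableLM-epoch variant with custom code, `trust_remote_code=True` is assumed to be required, and the prompt and settings are placeholders):

```python
# Minimal sketch: load the FP16 checkpoint and generate a short reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeiku/Rosa_v1_3.43B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer("Hello! How are you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```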
Ahmed107/hamsa-lora-v13
Ahmed107
2023-12-27T01:07:29Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-medium", "base_model:adapter:openai/whisper-medium", "region:us" ]
null
2023-12-27T01:07:25Z
--- library_name: peft base_model: openai/whisper-medium --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
calvinyz/ppo-LunarLander-v2
calvinyz
2023-12-27T01:05:12Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-27T01:04:52Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.45 +/- 22.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
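A sketch of how the usage section above is typically filled in (assuming the repo id from this page and a conventional checkpoint file name, which may differ — check the repository's file listing):

```python
# Minimal sketch: download the checkpoint from the Hub, load it, and evaluate it.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="calvinyz/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # hypothetical file name
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```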
LucyintheSky/pose-estimation-front-side-back
LucyintheSky
2023-12-27T00:49:20Z
257
1
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-03T16:25:14Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: pose-estimation-front-side-back results: [] --- # Pose Estimation: front,side,back ## Model description This model predicts the person's body position relative to the camera: **front, side, back**. It was trained on [Lucy in the Sky](https://www.lucyinthesky.com/shop) images. This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k). ## Training and evaluation data It achieves the following results on the evaluation set: - Loss: 0.2524 - Accuracy: 0.9355 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
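For quick use, a minimal inference sketch (assuming the checkpoint id from this page; the image path is a placeholder):

```python
# Minimal sketch: predict whether the person in a photo faces front, side, or back.
from transformers import pipeline

classifier = pipeline("image-classification", model="LucyintheSky/pose-estimation-front-side-back")
predictions = classifier("model_photo.jpg")  # hypothetical local image
print(predictions)  # e.g. [{'label': 'front', 'score': ...}, ...]
```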
calvinyz/q-Taxi-v3
calvinyz
2023-12-27T00:36:13Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-27T00:36:12Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="calvinyz/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
calvinyz/q-FrozenLake-v1-4x4-noSlippery
calvinyz
2023-12-27T00:32:34Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-27T00:32:32Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="calvinyz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Thaweewat/whisper-th-small-ct2
Thaweewat
2023-12-27T00:12:15Z
10
0
transformers
[ "transformers", "whisper", "Pytorch", "th", "base_model:biodatlab/whisper-th-small-combined", "base_model:finetune:biodatlab/whisper-th-small-combined", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-12-26T20:59:58Z
--- license: apache-2.0 language: - th base_model: biodatlab/whisper-th-small-combined tags: - whisper - Pytorch --- # Whisper-th-small-ct2 whisper-th-small-ct2 is the CTranslate2 format of [biodatlab/whisper-th-small-combined](https://huggingface.co/biodatlab/whisper-th-small-combined), compatible with [WhisperX](https://github.com/m-bain/whisperX) and [faster-whisper](https://github.com/SYSTRAN/faster-whisper), which enables: - 🤏 **Half the size** of the original Hugging Face format. - ⚡️ Batched inference for **70x** real-time transcription. - 🪶 A faster-whisper backend, requiring **<8GB GPU memory** with beam_size=5. - 🎯 Accurate word-level timestamps using wav2vec2 alignment. - 👯‍♂️ Multispeaker ASR using speaker diarization (includes speaker ID labels). - 🗣️ VAD preprocessing, reducing hallucinations and allowing batching with no WER degradation. ### Usage ```python !pip install git+https://github.com/m-bain/whisperx.git import whisperx import time # Settings device = "cuda" audio_file = "audio.mp3" batch_size = 16 compute_type = "float16" """ Your Hugging Face token for the Diarization model is required. Additionally, you need to accept the terms and conditions before use. Please visit the model page here. https://huggingface.co/pyannote/segmentation-3.0 """ HF_TOKEN = "" # Load the model and transcribe model = whisperx.load_model("Thaweewat/whisper-th-small-ct2", device, compute_type=compute_type) st_time = time.time() audio = whisperx.load_audio(audio_file) result = model.transcribe(audio, batch_size=batch_size) # Assign speaker labels diarize_model = whisperx.DiarizationPipeline(use_auth_token=HF_TOKEN, device=device) diarize_segments = diarize_model(audio) result = whisperx.assign_word_speakers(diarize_segments, result) # Combine pure text if needed combined_text = ' '.join(segment['text'] for segment in result['segments']) print(f"Response time: {time.time() - st_time} seconds") print(diarize_segments) print(result) print(combined_text) ```
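Since the card advertises faster-whisper compatibility, a shorter transcription-only path is also possible. A sketch under the assumption that faster-whisper can pull the CTranslate2 weights directly from this repo id:

```python
# Hedged sketch using the faster-whisper backend directly (assumes the Hub repo
# id can be passed as the model path; pip install faster-whisper).
from faster_whisper import WhisperModel

model = WhisperModel("Thaweewat/whisper-th-small-ct2", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5)

print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```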
ishaanpaul/q-FrozenLake-v1-4x4-noSlippery
ishaanpaul
2023-12-27T00:11:19Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-27T00:11:16Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="ishaanpaul/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
SamSJackson/ppo-SnowballTarget
SamSJackson
2023-12-26T23:59:40Z
15
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-12-26T23:59:31Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: SamSJackson/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
nicomp/myModel
nicomp
2023-12-26T23:57:23Z
0
0
adapter-transformers
[ "adapter-transformers", "text-classification", "en", "dataset:fka/awesome-chatgpt-prompts", "license:mit", "region:us" ]
text-classification
2023-12-26T23:44:28Z
--- license: mit datasets: - fka/awesome-chatgpt-prompts language: - en metrics: - accuracy library_name: adapter-transformers pipeline_tag: text-classification ---
graceneutrality/ppo-lunarlander
graceneutrality
2023-12-26T23:52:33Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-26T23:52:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 229.68 +/- 79.25 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Sakshi1307/ds2
Sakshi1307
2023-12-26T23:46:21Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2023-12-26T23:43:14Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
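The card's "How to Get Started" section is empty. A minimal loading sketch, assuming the adapter in this repo is a causal-LM LoRA trained on the declared microsoft/phi-2 base (none of this is documented in the card):

```python
# Minimal sketch, not from the card: attach the PEFT adapter in this repo to
# its declared base model (assumes a causal-LM LoRA adapter on microsoft/phi-2).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id, adapter_id = "microsoft/phi-2", "Sakshi1307/ds2"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt format is an assumption; adjust to whatever the adapter was trained on.
inputs = tokenizer("Instruct: Explain LoRA in one sentence.\nOutput:", return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```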
hkivancoral/hushem_40x_beit_large_adamax_00001_fold2
hkivancoral
2023-12-26T23:46:14Z
4
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-large-patch16-224", "base_model:finetune:microsoft/beit-large-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-26T22:28:21Z
--- license: apache-2.0 base_model: microsoft/beit-large-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: hushem_40x_beit_large_adamax_00001_fold2 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.8444444444444444 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_beit_large_adamax_00001_fold2 This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5239 - Accuracy: 0.8444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0134 | 1.0 | 215 | 0.7143 | 0.7556 | | 0.0005 | 2.0 | 430 | 0.8825 | 0.8444 | | 0.0002 | 3.0 | 645 | 1.1645 | 0.8 | | 0.0002 | 4.0 | 860 | 1.1853 | 0.8 | | 0.0001 | 5.0 | 1075 | 1.2007 | 0.8 | | 0.0001 | 6.0 | 1290 | 1.1677 | 0.8222 | | 0.0006 | 7.0 | 1505 | 1.1023 | 0.8222 | | 0.0001 | 8.0 | 1720 | 1.5156 | 0.7333 | | 0.0 | 9.0 | 1935 | 1.1716 | 0.8222 | | 0.0 | 10.0 | 2150 | 1.2763 | 0.8222 | | 0.0 | 11.0 | 2365 | 1.1176 | 0.8444 | | 0.0 | 12.0 | 2580 | 1.2233 | 0.8444 | | 0.0023 | 13.0 | 2795 | 1.5312 | 0.8 | | 0.0 | 14.0 | 3010 | 1.3548 | 0.8 | | 0.0 | 15.0 | 3225 | 1.2898 | 0.8222 | | 0.0 | 16.0 | 3440 | 1.2810 | 0.8222 | | 0.0 | 17.0 | 3655 | 1.3480 | 0.8222 | | 0.0 | 18.0 | 3870 | 1.2231 | 0.8444 | | 0.0 | 19.0 | 4085 | 1.2120 | 0.8444 | | 0.0 | 20.0 | 4300 | 1.3990 | 0.8222 | | 0.0 | 21.0 | 4515 | 1.3925 | 0.8222 | | 0.0 | 22.0 | 4730 | 1.3055 | 0.8444 | | 0.0 | 23.0 | 4945 | 1.3624 | 0.8222 | | 0.0 | 24.0 | 5160 | 1.3420 | 0.8222 | | 0.0 | 25.0 | 5375 | 1.3903 | 0.8222 | | 0.0 | 26.0 | 5590 | 1.3025 | 0.8444 | | 0.0 | 27.0 | 5805 | 1.3676 | 0.8444 | | 0.0 | 28.0 | 6020 | 1.3843 | 0.8444 | | 0.0 | 29.0 | 6235 | 1.4718 | 0.8 | | 0.0 | 30.0 | 6450 | 1.4946 | 0.8222 | | 0.0 | 31.0 | 6665 | 1.5006 | 0.8222 | | 0.0 | 32.0 | 6880 | 1.5270 | 0.8222 | | 0.0 | 33.0 | 7095 | 1.6386 | 0.8 | | 0.0 | 34.0 | 7310 | 1.5335 | 0.8222 | | 0.0 | 35.0 | 7525 | 1.5020 | 0.8444 | | 0.0 | 36.0 | 7740 | 1.5220 | 0.8444 | | 0.0 | 37.0 | 7955 | 1.6305 | 0.8 | | 0.0 | 38.0 | 8170 | 1.5482 | 0.8 | | 0.0 | 39.0 | 8385 | 1.5491 | 0.8 | | 0.0 | 40.0 | 8600 | 1.5716 | 0.8222 | | 0.0 | 41.0 | 8815 | 1.5929 | 0.8222 | | 0.0 | 42.0 | 9030 | 1.5745 | 0.8222 | | 0.0 | 43.0 | 9245 | 1.4702 | 0.8444 | | 0.0 | 44.0 | 9460 | 1.4777 | 0.8444 | | 0.0 | 45.0 | 9675 | 1.4961 | 0.8444 | | 0.0 | 46.0 | 9890 | 1.5108 | 0.8444 | | 0.0 | 47.0 | 10105 | 1.5228 | 0.8444 | | 0.0 | 48.0 | 10320 | 1.5215 | 0.8444 | | 0.0 | 49.0 | 10535 | 1.5246 | 0.8444 | | 0.0032 | 50.0 | 10750 | 1.5239 | 0.8444 | ### Framework versions 
- Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
LarryAIDraw/Shinomiya_KaguyaV1_0
LarryAIDraw
2023-12-26T23:40:23Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-26T23:35:34Z
--- license: creativeml-openrail-m --- https://civitai.com/models/243193/shinomiya-kaguya
LarryAIDraw/LoRA_Nefertari
LarryAIDraw
2023-12-26T23:40:12Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-26T23:35:00Z
--- license: creativeml-openrail-m --- https://civitai.com/models/243404/lora-nefertari-vivi-one-piece
andreatorch/Reinforce-Unit5-SnowballTarget
andreatorch
2023-12-26T23:34:24Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-12-26T23:34:20Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: andreatorch/Reinforce-Unit5-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
LarryAIDraw/usagi_tsukishiro_s1-lora-nochekaiser
LarryAIDraw
2023-12-26T23:33:14Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-26T23:29:11Z
--- license: creativeml-openrail-m --- https://civitai.com/models/244201/usagi-tsukishiro-my-life-as-inukai-sans-dog
LarryAIDraw/shion__tesei_shitara_slime_datta_ken_
LarryAIDraw
2023-12-26T23:33:01Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-26T23:28:41Z
--- license: creativeml-openrail-m --- https://civitai.com/models/244029/shion-that-time-i-got-reincarnated-as-a-slime
LarryAIDraw/shuna__tensei_shitara_slime_datta_ken_
LarryAIDraw
2023-12-26T23:32:50Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-26T23:28:10Z
--- license: creativeml-openrail-m --- https://civitai.com/models/243909/shuna-that-time-i-got-reincarnated-as-a-slime
LarryAIDraw/Character_ort_byleth_fe3h_v0_8
LarryAIDraw
2023-12-26T23:32:19Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-26T23:26:52Z
--- license: creativeml-openrail-m --- https://civitai.com/models/243574/byleth-female-fire-emblem-three-houses
LilaASMR/nuevoModel
LilaASMR
2023-12-26T23:29:43Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-12-26T23:22:26Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # fis-tuned-model This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('fis-tuned-model') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('fis-tuned-model') model = AutoModel.from_pretrained('fis-tuned-model') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=fis-tuned-model) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 427 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 12, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_chatGPT_temp1_Seed103
behzadnet
2023-12-26T23:26:08Z
1
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2023-12-26T23:26:06Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
abdel1311/a2c-PandaReachDense-v3
abdel1311
2023-12-26T23:15:30Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-26T23:11:14Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.22 +/- 0.14 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
User1115/whisper-large-v2-test-singleWord-small-sec-30steps-drop
User1115
2023-12-26T23:13:55Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-large-v2", "base_model:adapter:openai/whisper-large-v2", "region:us" ]
null
2023-12-26T23:13:51Z
--- library_name: peft base_model: openai/whisper-large-v2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Rouhan/my_awesome_qa_model
Rouhan
2023-12-26T23:06:08Z
23
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-12-25T19:58:53Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2872 | 1.0 | 2500 | 1.1891 | | 0.9911 | 2.0 | 5000 | 1.1156 | | 0.7923 | 3.0 | 7500 | 1.1458 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
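The card does not include a usage example. One way to try the fine-tuned checkpoint, assuming the repo contains both the model weights and the tokenizer files (the question and context below are only illustrative):

```python
# Hedged usage sketch for the extractive QA model via the transformers pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="Rouhan/my_awesome_qa_model")
answer = qa(
    question="What was the model fine-tuned from?",
    context="my_awesome_qa_model is a fine-tuned version of distilbert-base-uncased for extractive question answering.",
)
print(answer)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```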
pavitemple/finetuned-Accident-SingleLabel-Final-v4
pavitemple
2023-12-26T23:02:44Z
4
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-12-26T19:38:12Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: finetuned-Accident-SingleLabel-Final-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-Accident-SingleLabel-Final-v4 This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2089 - Accuracy: 0.5588 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 65 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.06 | 4 | 1.8526 | 0.1304 | | No log | 1.06 | 8 | 1.6872 | 0.3913 | | 1.6209 | 2.06 | 12 | 1.4251 | 0.5652 | | 1.6209 | 3.06 | 16 | 1.2785 | 0.5652 | | 1.1184 | 4.06 | 20 | 1.1813 | 0.6522 | | 1.1184 | 5.06 | 24 | 1.0827 | 0.4783 | | 1.1184 | 6.06 | 28 | 0.9423 | 0.6522 | | 1.0086 | 7.06 | 32 | 0.9538 | 0.6522 | | 1.0086 | 8.06 | 36 | 0.8784 | 0.6087 | | 0.7591 | 9.06 | 40 | 0.9870 | 0.6087 | | 0.7591 | 10.06 | 44 | 0.9913 | 0.6522 | | 0.7591 | 11.06 | 48 | 0.8661 | 0.6087 | | 0.6925 | 12.06 | 52 | 0.8789 | 0.5652 | | 0.6925 | 13.06 | 56 | 0.8263 | 0.6522 | | 0.7497 | 14.06 | 60 | 1.0772 | 0.6522 | | 0.7497 | 15.06 | 64 | 0.9689 | 0.6522 | | 0.7497 | 16.02 | 65 | 0.8805 | 0.6522 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
MBZUAI/GLaMM-RefSeg
MBZUAI
2023-12-26T22:51:19Z
10
1
transformers
[ "transformers", "pytorch", "llava", "text-generation", "arxiv:2311.03356", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-26T18:26:20Z
--- license: apache-2.0 --- # 👁️ GLaMM-RefSeg --- ## 📝 Description GLaMM-RefSeg is the model variant specific to referring expression segmentation. "RefSeg" denotes its focus on segmentation tasks related to referring expressions. ## 💻 Download To get started with GLaMM-RefSeg, follow these steps: ``` git lfs install git clone https://huggingface.co/MBZUAI/GLaMM-RefSeg ``` ## 📚 Additional Resources - **Paper:** [ArXiv](https://arxiv.org/abs/2311.03356). - **GitHub Repository:** For training and updates: [GitHub - GLaMM](https://github.com/mbzuai-oryx/groundingLMM). - **Project Page:** For a detailed overview and insights into the project, visit our [Project Page - GLaMM](https://mbzuai-oryx.github.io/groundingLMM/). ## 📜 Citations and Acknowledgments ```bibtex @article{hanoona2023GLaMM, title={GLaMM: Pixel Grounding Large Multimodal Model}, author={Rasheed, Hanoona and Maaz, Muhammad and Shaji, Sahal and Shaker, Abdelrahman and Khan, Salman and Cholakkal, Hisham and Anwer, Rao M. and Xing, Eric and Yang, Ming-Hsuan and Khan, Fahad S.}, journal={ArXiv 2311.03356}, year={2023} } ```
Rezakakooee/distilbert-base-uncased-finetuned-imdb
Rezakakooee
2023-12-26T22:51:15Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-12-26T22:40:40Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4826 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6903 | 1.0 | 157 | 2.5000 | | 2.5702 | 2.0 | 314 | 2.4713 | | 2.5245 | 3.0 | 471 | 2.4562 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
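A quick sanity check of the domain-adapted masked-language model, assuming the tokenizer was pushed alongside the weights (not part of the original card):

```python
# Hedged sketch: query the fill-mask head; [MASK] is DistilBERT's mask token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Rezakakooee/distilbert-base-uncased-finetuned-imdb")
for pred in fill_mask("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']:>12}  (score={pred['score']:.3f})")
```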
vkamenski/ppo-LunarLander-v2
vkamenski
2023-12-26T22:40:02Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-26T22:39:40Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 255.54 +/- 19.15 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
fbellame/mistral-7b-json-quizz-fine-tuned-trl
fbellame
2023-12-26T22:33:11Z
9
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2023-12-26T18:11:46Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: mistral-7b-json-quizz-fine-tuned-trl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-json-quizz-fine-tuned-trl This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 48 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 4.36 | 48 | 0.6321 | ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0 ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.2
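Given the bitsandbytes settings recorded above, one plausible inference setup is to quantize the base model the same way and attach this adapter; a sketch only, with the quantization call and the prompt format as assumptions:

```python
# Hedged sketch: load the base model in 8-bit (mirroring the recorded
# bitsandbytes config) and attach the LoRA adapter with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "fbellame/mistral-7b-json-quizz-fine-tuned-trl"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "[INST] Generate a one-question quiz about Paris as JSON. [/INST]"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```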
Viag/vigogne-2-13b-instruct-philosopher-fr_v0
Viag
2023-12-26T22:25:45Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:bofenghuang/vigogne-2-13b-instruct", "base_model:adapter:bofenghuang/vigogne-2-13b-instruct", "region:us" ]
null
2023-12-26T22:23:05Z
--- library_name: peft base_model: bofenghuang/vigogne-2-13b-instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
sheldonzhu/dqn-SpaceInvadersNoFrameskip-v4
sheldonzhu
2023-12-26T22:20:55Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-26T22:20:20Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 672.00 +/- 254.40 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sheldonzhu -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sheldonzhu -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sheldonzhu ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
TheBloke/MixtralOrochi8x7B-GPTQ
TheBloke
2023-12-26T22:16:33Z
27
7
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "uncensored", "high-intelligence", "en", "base_model:smelborp/MixtralOrochi8x7B", "base_model:quantized:smelborp/MixtralOrochi8x7B", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-12-26T19:54:02Z
--- base_model: smelborp/MixtralOrochi8x7B inference: false language: - en license: cc-by-nc-4.0 model_creator: Smelborp Bumblechump model_name: MixtralOrochi8X7B model_type: mixtral prompt_template: '{prompt} ' quantized_by: TheBloke tags: - mixtral - uncensored - high-intelligence --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MixtralOrochi8X7B - GPTQ - Model creator: [Smelborp Bumblechump](https://huggingface.co/smelborp) - Original model: [MixtralOrochi8X7B](https://huggingface.co/smelborp/MixtralOrochi8x7B) <!-- description start --> # Description This repo contains GPTQ model files for [Smelborp Bumblechump's MixtralOrochi8X7B](https://huggingface.co/smelborp/MixtralOrochi8x7B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MixtralOrochi8x7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MixtralOrochi8x7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MixtralOrochi8x7B-GGUF) * [Smelborp Bumblechump's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/smelborp/MixtralOrochi8x7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! 
<!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/MixtralOrochi8x7B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 23.81 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/MixtralOrochi8x7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.70 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/MixtralOrochi8x7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 27.42 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/MixtralOrochi8x7B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.85 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. 
|
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/MixtralOrochi8x7B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 21.43 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/MixtralOrochi8x7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 47.04 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/MixtralOrochi8x7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 48.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/MixtralOrochi8x7B-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/MixtralOrochi8x7B-GPTQ:gptq-4bit-128g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `MixtralOrochi8x7B-GPTQ`:

```shell
mkdir MixtralOrochi8x7B-GPTQ
huggingface-cli download TheBloke/MixtralOrochi8x7B-GPTQ --local-dir MixtralOrochi8x7B-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir MixtralOrochi8x7B-GPTQ
huggingface-cli download TheBloke/MixtralOrochi8x7B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir MixtralOrochi8x7B-GPTQ --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir MixtralOrochi8x7B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MixtralOrochi8x7B-GPTQ --local-dir MixtralOrochi8x7B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/MixtralOrochi8x7B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/MixtralOrochi8x7B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/MixtralOrochi8x7B-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `MixtralOrochi8x7B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. 
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/MixtralOrochi8x7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''{prompt} ''' client = InferenceClient(endpoint_url) response = client.text_generation( prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/MixtralOrochi8x7B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''{prompt} ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. 
<!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Smelborp Bumblechump's MixtralOrochi8X7B # Orochi <img src="https://huggingface.co/smelborp/MixtralOrochi8x7B/resolve/main/orochi.png" width="600" /> ## Overview Orochi is a cutting-edge language model based on the Mixtral architecture developed by Mistral. It represents a sophisticated merge of several prominent models, including Mixtral instruct, Noromaid, OpenBuddy, and several others, using mergekit with the DARE merge method. This model aims to provide highly intelligent responses unrestricted by content limitations. The name "Orochi" references the mythical Yamata-no-Orochi, symbolizing the model's multifaceted and powerful capabilities. ## Goals - **Uncensored Content**: To provide unrestricted and comprehensive responses across various domains. - **High Intelligence**: Leverage the combined knowledge and capabilities of the merged models to deliver insightful and accurate information. 
- **Innovation in Language Modeling**: Push the boundaries of what's possible in natural language understanding and generation. ## Model Details - **Architecture**: Mixtral, a Mixture of Experts model, underlies Orochi's design, enabling it to specialize and optimize its responses across different tasks and topics. - **Merge Strategy**: Utilizing mergekit and the DARE method, Orochi integrates aspects of various models to enhance its performance and capabilities. ## Usage Due to its uncensored nature, Orochi is best utilized in environments where intelligent, unrestricted dialogue is necessary. Users are encouraged to implement their own content moderation or alignment strategies appropriate for their use case. ## Ethical Considerations As an uncensored model, Orochi may generate content that is unsuitable for all audiences. Users are advised to consider the implications of using such a model and to implement suitable safeguards and ethical guidelines. ## Acknowledgements Orochi is a product of numerous contributions from the fields of machine learning and language modeling. Special thanks to the teams behind Mixtral, mergekit, and all the individual models integrated into Orochi. ---
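The DARE merge mentioned above works by dropping most of each fine-tune's delta weights and rescaling the survivors before adding them back onto the base model. The sketch below is only a toy illustration of that idea on random tensors; it is not the actual Orochi recipe, and the drop probability and merge weights are made up.

```python
import torch

def dare_merge(base, finetuned_list, weights, drop_p=0.9):
    """Toy DARE merge: drop each delta entry with probability drop_p,
    rescale the survivors by 1/(1-drop_p), then add the weighted deltas
    back onto the base tensor."""
    merged = base.clone()
    for ft, w in zip(finetuned_list, weights):
        delta = ft - base                         # task vector of this fine-tune
        mask = (torch.rand_like(delta) > drop_p)  # keep roughly (1-drop_p) of entries
        merged += w * mask * delta / (1.0 - drop_p)
    return merged

# Illustrative tensors standing in for a single weight matrix of each model
base = torch.randn(4, 4)
finetunes = [base + 0.1 * torch.randn(4, 4) for _ in range(3)]
print(dare_merge(base, finetunes, weights=[0.4, 0.3, 0.3]))
```

In practice mergekit applies this per-parameter across whole checkpoints; see the repo's merge.yml for the real configuration.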
joshmittal/phi-2-finetuned
joshmittal
2023-12-26T22:06:34Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:other", "region:us" ]
null
2023-12-15T14:13:42Z
--- license: other base_model: microsoft/phi-2 tags: - generated_from_trainer model-index: - name: phi-2-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-finetuned This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
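For readers who want to set up a comparable run, the hyperparameters listed above map onto Hugging Face `TrainingArguments` roughly as sketched below. This is an assumption-laden reconstruction (the card only indicates that the HF Trainer was used); the output directory is a placeholder and no dataset handling is shown.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the values reported in this card.
args = TrainingArguments(
    output_dir="phi-2-finetuned",      # placeholder, not from the card
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # total train batch size = 4
    lr_scheduler_type="cosine",
    max_steps=1000,
    seed=42,
)
```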
iamdanialkamali/Taxi-v3
iamdanialkamali
2023-12-26T21:54:59Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-26T21:54:57Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# `load_from_hub` is the helper defined in the Deep RL course notebook;
# `gym` refers to Gym/Gymnasium imported as `gym`.
model = load_from_hub(repo_id="iamdanialkamali/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
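Once loaded, the agent is just a Q-table queried greedily. A minimal evaluation sketch is shown below; it assumes the pickled dict exposes the table under a key such as `"qtable"` and that Gymnasium is installed, which may differ from the exact setup used in the course helper.

```python
import gymnasium as gym
import numpy as np

# Assumes `model` was loaded as in the snippet above and stores the
# Q-table under "qtable" (the key name is an assumption; inspect the pickle).
env = gym.make(model["env_id"], render_mode="rgb_array")
state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```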
iamdanialkamali/q-FrozenLake-v1-4x4-noSlippery
iamdanialkamali
2023-12-26T21:49:17Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-26T21:49:15Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
# `load_from_hub` is the helper defined in the Deep RL course notebook;
# `gym` refers to Gym/Gymnasium imported as `gym`.
model = load_from_hub(repo_id="iamdanialkamali/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
jeiku/Taste_Test_3B
jeiku
2023-12-26T21:28:53Z
20
0
transformers
[ "transformers", "safetensors", "gguf", "stablelm_epoch", "text-generation", "conversational", "custom_code", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-26T20:53:48Z
---
license: other
language:
- en
---
Check merge.yml for details on how this model was created.

This is a test model built to try out new merging techniques. It has not been thoroughly tested, but it should perform at least as well as the average 3B model, and possibly better.
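The card doesn't include usage code. A minimal loading sketch for a StableLM-epoch-architecture model with `transformers` might look like the following; the need for `trust_remote_code` is an assumption based on the repo's `custom_code` tag, and the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jeiku/Taste_Test_3B"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",
    trust_remote_code=True,  # stablelm_epoch repos ship custom modeling code
)

inputs = tok("The quickest way to merge two models is", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```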
andreatorch/Reinforce-Unit7-pocaSoccerTwos
andreatorch
2023-12-26T21:09:27Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-12-26T21:08:08Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: andreatorch/Reinforce-Unit7-pocaSoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
NW-temp/previous-best-my-awesome-setfit-model
NW-temp
2023-12-26T21:00:22Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-28T19:32:08Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # NW-temp/my-awesome-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("NW-temp/my-awesome-setfit-model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
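The two-step procedure described above (contrastive fine-tuning of the Sentence Transformer, then fitting a classification head) is what `SetFitTrainer` runs under the hood. Below is a small, self-contained training sketch using the 2023-era SetFit API; the toy dataset and the base checkpoint are illustrative only and are not the data or backbone actually used for this repo.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative few-shot dataset (not the data this model was trained on)
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮",
             "what a fantastic soundtrack", "the service was terribly slow"],
    "label": [1, 0, 1, 0],
})

# Illustrative base; the checkpoint actually used by this repo isn't stated in the card
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning
    num_iterations=20,                # text pairs generated per example
    num_epochs=1,                     # step 2: the classification head is then fitted
)
trainer.train()
print(model(["great plot and acting"]))
```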
arhamh/q-FrozenLake-v1-4x4-noSlippery
arhamh
2023-12-26T20:54:36Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-26T20:54:34Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
# `load_from_hub` is the helper defined in the Deep RL course notebook;
# `gym` refers to Gym/Gymnasium imported as `gym`.
model = load_from_hub(repo_id="arhamh/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
MBZUAI/GLaMM-RegCap-VG
MBZUAI
2023-12-26T20:48:09Z
103
0
transformers
[ "transformers", "pytorch", "llava", "text-generation", "arxiv:2311.03356", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-26T18:21:54Z
---
license: apache-2.0
---

# 👁️ GLaMM-RegCap-VG

---
## 📝 Description
GLaMM-RegCap-VG is the GLaMM variant specialized for region-level captioning, finetuned on the Visual Genome dataset; the "RegCap-VG" suffix reflects this specialization.

## 💻 Download
To get started with GLaMM-RegCap-VG, follow these steps:
```
git lfs install
git clone https://huggingface.co/MBZUAI/GLaMM-RegCap-VG
```

## 📚 Additional Resources
- **Paper:** [ArXiv](https://arxiv.org/abs/2311.03356).
- **GitHub Repository:** For training and updates: [GitHub - GLaMM](https://github.com/mbzuai-oryx/groundingLMM).
- **Project Page:** For a detailed overview and insights into the project, visit our [Project Page - GLaMM](https://mbzuai-oryx.github.io/groundingLMM/).

## 📜 Citations and Acknowledgments

```bibtex
@article{hanoona2023GLaMM,
        title={GLaMM: Pixel Grounding Large Multimodal Model},
        author={Rasheed, Hanoona and Maaz, Muhammad and Shaji, Sahal and Shaker, Abdelrahman and Khan, Salman and Cholakkal, Hisham and Anwer, Rao M. and Xing, Eric and Yang, Ming-Hsuan and Khan, Fahad S.},
        journal={ArXiv 2311.03356},
        year={2023}
}
```
jbolaifa/Intent_Classification
jbolaifa
2023-12-26T20:38:04Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-26T20:11:18Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: Intent_Classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Intent_Classification This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2925 - Accuracy: 0.9321 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5505 | 1.0 | 626 | 0.4355 | 0.9075 | | 0.2572 | 2.0 | 1252 | 0.3289 | 0.9234 | | 0.1352 | 3.0 | 1878 | 0.3091 | 0.9305 | | 0.0718 | 4.0 | 2504 | 0.2925 | 0.9321 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
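Since the card omits usage instructions, inference can be done with the standard `transformers` text-classification pipeline, as sketched below. The example sentence is illustrative, and the returned label names depend on this repo's `id2label` mapping, which the card does not list.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jbolaifa/Intent_Classification")

# Illustrative input; returned labels depend on the repo's id2label mapping
print(classifier("I want to transfer money to my savings account"))
```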
marvelo2506/dqn-SpaceInvadersNoFrameskip-v4
marvelo2506
2023-12-26T20:30:40Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-26T20:30:10Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 475.50 +/- 83.83 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga marvelo2506 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga marvelo2506 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga marvelo2506 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
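Besides the RL Zoo scripts above, the checkpoint can also be loaded directly with `huggingface_sb3` and Stable-Baselines3, as sketched below. The `.zip` filename follows the usual RL Zoo naming convention and is an assumption; check the repo's file list before running.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename follows the usual "<algo>-<env>.zip" RL Zoo convention (assumption)
checkpoint = load_from_hub(
    repo_id="marvelo2506/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# custom_objects helps when the installed SB3 version differs from the training one
model = DQN.load(checkpoint, custom_objects={"learning_rate": 0.0, "lr_schedule": lambda _: 0.0})
print(model.policy)
```

To actually run the policy you would still need the Atari environment with the same frame-stack and wrapper settings listed in the hyperparameters above.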
mikexu123/11221
mikexu123
2023-12-26T20:14:21Z
0
0
null
[ "arxiv:1910.09700", "license:apache-2.0", "region:us" ]
null
2023-12-26T20:12:16Z
--- license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bartowski/neural-chat-7b-v3-3-wizardmath-dare-me-exl2
bartowski
2023-12-26T20:11:50Z
0
0
null
[ "merge", "text-generation", "license:other", "region:us" ]
text-generation
2023-12-26T18:34:11Z
---
license: other
license_name: microsoft-research-license
license_link: LICENSE
tags:
- merge
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of neural-chat-7b-v3-3-wizardmath-dare-me

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.

Original model: https://huggingface.co/SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me

<a href="https://huggingface.co/bartowski/neural-chat-7b-v3-3-wizardmath-dare-me-exl2/tree/4_0">4.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/neural-chat-7b-v3-3-wizardmath-dare-me-exl2/tree/5_0">5.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/neural-chat-7b-v3-3-wizardmath-dare-me-exl2/tree/6_0">6.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/neural-chat-7b-v3-3-wizardmath-dare-me-exl2/tree/8_0">8.0 bits per weight</a>

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/neural-chat-7b-v3-3-wizardmath-dare-me-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` (only useful if you only care about measurement.json) branch to a folder called `neural-chat-7b-v3-3-wizardmath-dare-me-exl2`:

```shell
mkdir neural-chat-7b-v3-3-wizardmath-dare-me-exl2
huggingface-cli download bartowski/neural-chat-7b-v3-3-wizardmath-dare-me-exl2 --local-dir neural-chat-7b-v3-3-wizardmath-dare-me-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir neural-chat-7b-v3-3-wizardmath-dare-me-exl2
huggingface-cli download bartowski/neural-chat-7b-v3-3-wizardmath-dare-me-exl2 --revision 4_0 --local-dir neural-chat-7b-v3-3-wizardmath-dare-me-exl2 --local-dir-use-symlinks False
```
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_chatGPT_temp1_Seed102
behzadnet
2023-12-26T19:51:35Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2023-12-26T19:51:33Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
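The quantization config listed above corresponds to a `BitsAndBytesConfig` like the one sketched here. Loading this repo as a PEFT adapter on top of the stated base model is a reasonable but unverified way to use it, and assumes the repo contains LoRA/PEFT adapter weights.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirrors the bitsandbytes settings reported in the card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter from this repo (assumes it stores PEFT adapter weights)
model = PeftModel.from_pretrained(
    base, "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_chatGPT_temp1_Seed102"
)
```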
ntc-ai/SDXL-LoRA-slider.overgown-foliage
ntc-ai
2023-12-26T19:49:34Z
30
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2023-12-26T19:49:31Z
--- language: - en thumbnail: "images/evaluate/overgown foliage.../overgown foliage_17_3.0.png" widget: - text: overgown foliage output: url: images/overgown foliage_17_3.0.png - text: overgown foliage output: url: images/overgown foliage_19_3.0.png - text: overgown foliage output: url: images/overgown foliage_20_3.0.png - text: overgown foliage output: url: images/overgown foliage_21_3.0.png - text: overgown foliage output: url: images/overgown foliage_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "overgown foliage" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - overgown foliage (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/overgown foliage_17_-3.0.png" width=256 height=256 /> | <img src="images/overgown foliage_17_0.0.png" width=256 height=256 /> | <img src="images/overgown foliage_17_3.0.png" width=256 height=256 /> | | <img src="images/overgown foliage_19_-3.0.png" width=256 height=256 /> | <img src="images/overgown foliage_19_0.0.png" width=256 height=256 /> | <img src="images/overgown foliage_19_3.0.png" width=256 height=256 /> | | <img src="images/overgown foliage_20_-3.0.png" width=256 height=256 /> | <img src="images/overgown foliage_20_0.0.png" width=256 height=256 /> | <img src="images/overgown foliage_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` overgown foliage ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.overgown-foliage', weight_name='overgown foliage.safetensors', adapter_name="overgown foliage") # Activate the LoRA pipe.set_adapters(["overgown foliage"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, overgown foliage" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 640+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
birdhouse5/textual_inversion_1000_steps
birdhouse5
2023-12-26T19:46:41Z
3
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-26T16:45:06Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---

# Textual inversion text2image fine-tuning - birdhouse5/textual_inversion_naive_artstyle_Ca
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
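A usage sketch with `diffusers` follows. The learned placeholder token is not stated in the card, so the token used in the prompt below is an assumption; check the repo's learned embedding files for the actual trigger.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Registers the learned embedding from this repo; the placeholder token it adds
# is defined by the training run, not by this card ("<naive-artstyle>" is a guess)
pipe.load_textual_inversion("birdhouse5/textual_inversion_1000_steps")

image = pipe("a mountain village painted in <naive-artstyle> style").images[0]
image.save("example.png")
```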
osiemmasel123/PLKD
osiemmasel123
2023-12-26T19:43:40Z
0
0
null
[ "license:other", "region:us" ]
null
2023-12-26T19:42:56Z
--- license: other license_name: rvcv2 license_link: LICENSE --- <audio controls src="https://cdn-uploads.huggingface.co/production/uploads/658b2c49135580745c38d43c/z44eSUiMGDv36QE2aZkJT.mpga"></audio>
Artanis1551/bert_trainer
Artanis1551
2023-12-26T19:21:50Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-26T08:56:04Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert_trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_trainer This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.5246909856796265 - eval_accuracy: 0.8830541237113402 - eval_runtime: 70.1829 - eval_samples_per_second: 44.227 - eval_steps_per_second: 2.764 - epoch: 3.87 - step: 3000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.308177098205707e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 3000 ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
turboderp/Mixtral-8x7B-exl2
turboderp
2023-12-26T19:15:42Z
1
14
null
[ "region:us" ]
null
2023-12-16T17:49:26Z
EXL2 quants of [Mixtral 8x7B v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)

Supported in ExLlamaV2 0.0.11 and up

[2.40 bits per weight](https://huggingface.co/turboderp/Mixtral-8x7B-exl2/tree/2.4bpw)

[2.50 bits per weight](https://huggingface.co/turboderp/Mixtral-8x7B-exl2/tree/2.5bpw)

[2.70 bits per weight](https://huggingface.co/turboderp/Mixtral-8x7B-exl2/tree/2.7bpw)

[3.00 bits per weight](https://huggingface.co/turboderp/Mixtral-8x7B-exl2/tree/3.0bpw)

[3.50 bits per weight](https://huggingface.co/turboderp/Mixtral-8x7B-exl2/tree/3.5bpw)

[4.00 bits per weight](https://huggingface.co/turboderp/Mixtral-8x7B-exl2/tree/4.0bpw)

[5.00 bits per weight](https://huggingface.co/turboderp/Mixtral-8x7B-exl2/tree/5.0bpw)

[6.00 bits per weight](https://huggingface.co/turboderp/Mixtral-8x7B-exl2/tree/6.0bpw)

[8.00 bits per weight](https://huggingface.co/turboderp/Mixtral-8x7B-exl2/tree/8.0bpw)

[measurement.json](https://huggingface.co/turboderp/Mixtral-8x7B-exl2/blob/main/measurement.json)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/0CE2kq4P5QsMsPFiHuqJJ.png)
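To fetch one of the branches above programmatically, `huggingface_hub.snapshot_download` with a `revision` works, as sketched below. The branch name is taken from the links above; the local path is a placeholder.

```python
from huggingface_hub import snapshot_download

# Download only the 4.0 bpw branch of the EXL2 quants
local_path = snapshot_download(
    repo_id="turboderp/Mixtral-8x7B-exl2",
    revision="4.0bpw",                      # branch name as linked above
    local_dir="Mixtral-8x7B-exl2-4.0bpw",   # placeholder destination folder
)
print(local_path)
```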
mwpt5/t5-mawps-pen
mwpt5
2023-12-26T19:14:39Z
9
1
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-24T08:13:50Z
--- widget: - text: "mwp: N_00 - N_01 <keywordtext> apples bananas shop" example_title: "Univariate 1" - text: "mwp: N_00 + N_01 - N_02 <keywordtext> candy chocolates" example_title: "Univariate 2" - text: "mwp: (N_00 + N_01) / N_02 <keywordtext> music" example_title: "Univariate 3" - text: "mwp: N_00 * N_01 <keywordtext> clothes shirts piles" example_title: "Univariate 4" inference: parameters: temperature: 1.0 min_length: 20 max_length: 256 do_sample: True # top_k: 0.0 top_p: 0.8 repetition_penalty: 1.0 ---
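Given the widget examples above, the model appears to expect prompts of the form `mwp: <equation template> <keywordtext> <keywords>`. A minimal inference sketch with the text2text pipeline is shown below; the generation settings mirror the widget parameters, and everything else is an assumption.

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="mwpt5/t5-mawps-pen")

# Prompt format taken from the widget examples above
prompt = "mwp: N_00 - N_01 <keywordtext> apples bananas shop"
out = generator(prompt, max_length=256, do_sample=True, top_p=0.8, temperature=1.0)
print(out[0]["generated_text"])
```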
freyagracia/distilbert-base-uncased-finetuned-tweet_pemilu_2
freyagracia
2023-12-26T19:13:16Z
1
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-12-19T09:39:54Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: freyagracia/distilbert-base-uncased-finetuned-tweet_pemilu_2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # freyagracia/distilbert-base-uncased-finetuned-tweet_pemilu_2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.3796 - Validation Loss: 2.3447 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -937, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.8444 | 4.5973 | 0 | | 4.5323 | 4.3017 | 1 | | 4.2944 | 4.0347 | 2 | | 4.0319 | 3.8860 | 3 | | 3.8297 | 3.6184 | 4 | | 3.5679 | 3.4363 | 5 | | 3.4085 | 3.2184 | 6 | | 3.2081 | 3.1093 | 7 | | 3.0778 | 2.9026 | 8 | | 2.9089 | 2.7794 | 9 | | 2.8247 | 2.6472 | 10 | | 2.6888 | 2.6064 | 11 | | 2.6255 | 2.5352 | 12 | | 2.4976 | 2.4881 | 13 | | 2.4868 | 2.3892 | 14 | | 2.4120 | 2.3527 | 15 | | 2.3771 | 2.3493 | 16 | | 2.3609 | 2.3212 | 17 | | 2.3714 | 2.3579 | 18 | | 2.3796 | 2.3447 | 19 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.0 - Tokenizers 0.15.0
abdel1311/Reinforce-Pixelcopter-PLE-v0
abdel1311
2023-12-26T19:06:38Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-12-25T16:46:48Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 38.00 +/- 24.52 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
rizvi-rahil786/t5-small-samsum
rizvi-rahil786
2023-12-26T18:57:18Z
3
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-26T18:57:05Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-samsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7108 - Rouge1: 42.8796 - Rouge2: 19.1218 - Rougel: 35.393 - Rougelsum: 39.3635 - Gen Len: 16.8901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.0185 | 1.0 | 1842 | 1.7918 | 40.569 | 17.0622 | 33.4617 | 37.1907 | 16.8938 | | 1.8881 | 2.0 | 3684 | 1.7479 | 41.9209 | 18.5938 | 34.8969 | 38.5288 | 16.6435 | | 1.8222 | 3.0 | 5526 | 1.7269 | 42.2611 | 19.1114 | 35.3077 | 39.0834 | 17.0696 | | 1.8011 | 4.0 | 7368 | 1.7136 | 42.8138 | 19.2426 | 35.6329 | 39.4298 | 16.9158 | | 1.7812 | 5.0 | 9210 | 1.7108 | 42.8796 | 19.1218 | 35.393 | 39.3635 | 16.8901 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0