Dataset schema (one row per model card record):

| Column | Dtype | Range / distinct values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-15 00:43:56 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 521 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-15 00:40:56 |
| card | string | length 11 to 1.01M |
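A minimal sketch of reading such a dump with the `datasets` library, assuming it is published as a Hub dataset (the dataset ID below is hypothetical):

```python
from datasets import load_dataset

# Hypothetical dataset ID; substitute the repo this dump actually comes from.
ds = load_dataset("username/hub-model-cards", split="train")

# Each row mirrors the schema above.
row = ds[0]
print(row["modelId"], row["pipeline_tag"], row["downloads"])
```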
DrishtiSharma/mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.1
DrishtiSharma
2023-09-02T17:32:29Z
11
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "translation", "generated_from_trainer", "base_model:facebook/mbart-large-50", "base_model:finetune:facebook/mbart-large-50", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-09-02T15:17:32Z
--- license: mit base_model: facebook/mbart-large-50 tags: - translation - generated_from_trainer metrics: - bleu - rouge model-index: - name: mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.1 This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9532 - Bleu: 45.1551 - Rouge: {'rouge1': 0.707093830119779, 'rouge2': 0.5240989044660875, 'rougeL': 0.6865395711179825, 'rougeLsum': 0.6867643949864491} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------------------------------------------------------------------------------------------------------------------------:| | 1.4485 | 1.0 | 4500 | 1.0236 | 42.1586 | {'rouge1': 0.6728104679322686, 'rouge2': 0.4866267759088613, 'rougeL': 0.6507619922873461, 'rougeLsum': 0.6508024989844624} | | 0.8867 | 2.0 | 9000 | 0.9542 | 44.1945 | {'rouge1': 0.6933374960151913, 'rouge2': 0.5090654274262618, 'rougeL': 0.6722360570050694, 'rougeLsum': 0.6723972406375381} | | 0.7112 | 3.0 | 13500 | 0.9408 | 44.9173 | {'rouge1': 0.7047659807760827, 'rouge2': 0.5200169348076622, 'rougeL': 0.6839031690668775, 'rougeLsum': 0.6842067045539153} | | 0.6075 | 4.0 | 18000 | 0.9532 | 45.2020 | {'rouge1': 0.7070170730434684, 'rouge2': 0.5239391023023636, 'rougeL': 0.6863309446860562, 'rougeLsum': 0.6866635686411662} | ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4.dev0 - Tokenizers 0.13.3
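A minimal inference sketch for the checkpoint above, assuming the standard mBART-50 language codes (`en_XX` for English, `es_XX` for Spanish); this usage is not part of the original card:

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "DrishtiSharma/mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.1"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
# Force the decoder to start with the Spanish language token.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```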
bigmorning/whisper_syl_noforce__0030
bigmorning
2023-09-02T17:19:47Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-02T17:19:39Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_syl_noforce__0030 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_syl_noforce__0030 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7408 - Train Accuracy: 0.0305 - Train Wermet: 0.2408 - Validation Loss: 0.9883 - Validation Accuracy: 0.0216 - Validation Wermet: 0.3596 - Epoch: 29 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 5.2961 | 0.0113 | 1.9043 | 3.9402 | 0.0116 | 0.9526 | 0 | | 4.6207 | 0.0121 | 0.8740 | 3.7957 | 0.0120 | 0.9397 | 1 | | 4.4142 | 0.0128 | 0.8473 | 3.6045 | 0.0124 | 0.8988 | 2 | | 4.1915 | 0.0135 | 0.8361 | 3.4445 | 0.0128 | 0.9019 | 3 | | 4.0072 | 0.0140 | 0.8260 | 3.3268 | 0.0131 | 0.8816 | 4 | | 3.8559 | 0.0145 | 0.8084 | 3.2440 | 0.0133 | 0.8592 | 5 | | 3.7359 | 0.0149 | 0.7986 | 3.1751 | 0.0135 | 0.8598 | 6 | | 3.6368 | 0.0152 | 0.7891 | 3.1298 | 0.0136 | 0.8398 | 7 | | 3.5465 | 0.0154 | 0.7775 | 3.0736 | 0.0138 | 0.8606 | 8 | | 3.4710 | 0.0157 | 0.7681 | 3.0318 | 0.0138 | 0.8455 | 9 | | 3.3988 | 0.0159 | 0.7603 | 3.0159 | 0.0139 | 0.8770 | 10 | | 3.3279 | 0.0162 | 0.7504 | 2.9672 | 0.0141 | 0.8241 | 11 | | 3.2611 | 0.0164 | 0.7397 | 2.9541 | 0.0141 | 0.8676 | 12 | | 3.1996 | 0.0167 | 0.7284 | 2.8913 | 0.0144 | 0.7990 | 13 | | 3.1311 | 0.0169 | 0.7162 | 2.8671 | 0.0145 | 0.7934 | 14 | | 3.0590 | 0.0172 | 0.7044 | 2.8241 | 0.0146 | 0.7907 | 15 | | 2.9692 | 0.0177 | 0.6843 | 2.7517 | 0.0149 | 0.7645 | 16 | | 2.8783 | 0.0181 | 0.6630 | 2.6682 | 0.0152 | 0.7263 | 17 | | 2.7622 | 0.0187 | 0.6417 | 2.5586 | 0.0156 | 0.7220 | 18 | | 2.6164 | 0.0194 | 0.6138 | 2.4121 | 0.0161 | 0.6909 | 19 | | 2.4405 | 0.0203 | 0.5838 | 2.2417 | 0.0167 | 0.6527 | 20 | | 2.2404 | 0.0213 | 0.5486 | 2.1401 | 0.0170 | 0.6662 | 21 | | 2.0196 | 0.0225 | 0.5086 | 1.8907 | 0.0180 | 0.5774 | 22 | | 1.7917 | 0.0237 | 0.4665 | 1.7073 | 0.0186 | 0.5446 | 23 | | 1.5286 | 0.0253 | 0.4182 | 1.5139 | 0.0194 | 0.4919 | 24 | | 1.2991 | 0.0267 | 0.3736 | 1.3605 | 0.0200 | 0.4570 | 25 | | 1.1117 | 0.0279 | 0.3336 | 1.2304 | 0.0205 | 0.4262 | 26 | | 0.9643 | 0.0289 | 0.2986 | 1.1387 | 0.0209 | 0.4040 | 27 | | 0.8404 | 0.0298 | 0.2663 | 1.0514 | 0.0213 | 0.3776 | 28 | | 0.7408 | 0.0305 | 0.2408 | 0.9883 | 0.0216 | 0.3596 | 29 | ### Framework versions - Transformers 4.33.0.dev0 - TensorFlow 2.13.0 - Tokenizers 0.13.3
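A minimal transcription sketch for this TensorFlow Whisper checkpoint; it assumes the repo ships processor files (otherwise load the processor from `openai/whisper-tiny`) and expects 16 kHz mono audio:

```python
import numpy as np
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

model_id = "bigmorning/whisper_syl_noforce__0030"
processor = WhisperProcessor.from_pretrained(model_id)  # assumption: processor files are present
model = TFWhisperForConditionalGeneration.from_pretrained(model_id)

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True))
```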
CzarnyRycerz/taxi-v3-q-table
CzarnyRycerz
2023-09-02T17:17:01Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T16:40:46Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-v3-q-table results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="CzarnyRycerz/taxi-v3-q-table", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
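The card's snippet calls `load_from_hub` without defining it; a minimal sketch of such a helper, assuming the Q-table is a pickled object stored under the filename shown:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled artifact from the Hub and deserialize it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```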
The-matt/autumn-shadow-48_220
The-matt
2023-09-02T17:16:20Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T17:16:14Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
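As a sketch, the quantization settings listed above map onto a `BitsAndBytesConfig` like the one below when loading a base model before attaching this adapter; the card does not record the base model, so its ID here is a placeholder:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,       # matches load_in_8bit: True
    llm_int8_threshold=6.0,  # matches llm_int8_threshold: 6.0
)
base = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)  # hypothetical base
model = PeftModel.from_pretrained(base, "The-matt/autumn-shadow-48_220")
```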
PraveenJesu/whisper-medium-96-random-peft-V1-drug_list
PraveenJesu
2023-09-02T17:16:03Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T17:16:02Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
KingKazma/xsum_t5-small_p_tuning_500_10_50000_8_e5_s6789_v4_l4_v100
KingKazma
2023-09-02T17:15:46Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T17:15:42Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
bigmorning/whisper_syl_noforce__0025
bigmorning
2023-09-02T17:06:34Z
59
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-02T17:06:27Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_keras_callback model-index: - name: whisper_syl_noforce__0025 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_syl_noforce__0025 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5286 - Train Accuracy: 0.0253 - Train Wermet: 0.4182 - Validation Loss: 1.5139 - Validation Accuracy: 0.0194 - Validation Wermet: 0.4919 - Epoch: 24 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 5.2961 | 0.0113 | 1.9043 | 3.9402 | 0.0116 | 0.9526 | 0 | | 4.6207 | 0.0121 | 0.8740 | 3.7957 | 0.0120 | 0.9397 | 1 | | 4.4142 | 0.0128 | 0.8473 | 3.6045 | 0.0124 | 0.8988 | 2 | | 4.1915 | 0.0135 | 0.8361 | 3.4445 | 0.0128 | 0.9019 | 3 | | 4.0072 | 0.0140 | 0.8260 | 3.3268 | 0.0131 | 0.8816 | 4 | | 3.8559 | 0.0145 | 0.8084 | 3.2440 | 0.0133 | 0.8592 | 5 | | 3.7359 | 0.0149 | 0.7986 | 3.1751 | 0.0135 | 0.8598 | 6 | | 3.6368 | 0.0152 | 0.7891 | 3.1298 | 0.0136 | 0.8398 | 7 | | 3.5465 | 0.0154 | 0.7775 | 3.0736 | 0.0138 | 0.8606 | 8 | | 3.4710 | 0.0157 | 0.7681 | 3.0318 | 0.0138 | 0.8455 | 9 | | 3.3988 | 0.0159 | 0.7603 | 3.0159 | 0.0139 | 0.8770 | 10 | | 3.3279 | 0.0162 | 0.7504 | 2.9672 | 0.0141 | 0.8241 | 11 | | 3.2611 | 0.0164 | 0.7397 | 2.9541 | 0.0141 | 0.8676 | 12 | | 3.1996 | 0.0167 | 0.7284 | 2.8913 | 0.0144 | 0.7990 | 13 | | 3.1311 | 0.0169 | 0.7162 | 2.8671 | 0.0145 | 0.7934 | 14 | | 3.0590 | 0.0172 | 0.7044 | 2.8241 | 0.0146 | 0.7907 | 15 | | 2.9692 | 0.0177 | 0.6843 | 2.7517 | 0.0149 | 0.7645 | 16 | | 2.8783 | 0.0181 | 0.6630 | 2.6682 | 0.0152 | 0.7263 | 17 | | 2.7622 | 0.0187 | 0.6417 | 2.5586 | 0.0156 | 0.7220 | 18 | | 2.6164 | 0.0194 | 0.6138 | 2.4121 | 0.0161 | 0.6909 | 19 | | 2.4405 | 0.0203 | 0.5838 | 2.2417 | 0.0167 | 0.6527 | 20 | | 2.2404 | 0.0213 | 0.5486 | 2.1401 | 0.0170 | 0.6662 | 21 | | 2.0196 | 0.0225 | 0.5086 | 1.8907 | 0.0180 | 0.5774 | 22 | | 1.7917 | 0.0237 | 0.4665 | 1.7073 | 0.0186 | 0.5446 | 23 | | 1.5286 | 0.0253 | 0.4182 | 1.5139 | 0.0194 | 0.4919 | 24 | ### Framework versions - Transformers 4.33.0.dev0 - TensorFlow 2.13.0 - Tokenizers 0.13.3
The-matt/autumn-shadow-48_210
The-matt
2023-09-02T17:06:12Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T17:06:08Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
leofn3/modelo_racismo
leofn3
2023-09-02T17:01:56Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:PORTULAN/albertina-900m-portuguese-ptbr-encoder-brwac", "base_model:finetune:PORTULAN/albertina-900m-portuguese-ptbr-encoder-brwac", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-18T14:11:56Z
--- license: other base_model: PORTULAN/albertina-ptbr tags: - generated_from_trainer metrics: - accuracy model-index: - name: modelo_racismo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # modelo_racismo This model is a fine-tuned version of [PORTULAN/albertina-ptbr](https://huggingface.co/PORTULAN/albertina-ptbr) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0036 - Accuracy: 0.9989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 468 | 0.2304 | 0.9583 | | 0.7037 | 2.0 | 936 | 0.0847 | 0.9840 | | 0.256 | 3.0 | 1404 | 0.0075 | 0.9979 | | 0.0759 | 4.0 | 1872 | 0.0036 | 0.9989 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
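A minimal inference sketch for this classifier (not part of the card); the example text is Portuguese, matching the PT-BR base model:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="leofn3/modelo_racismo")
print(clf("Texto de exemplo para classificação."))  # "Example text for classification."
```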
ukeme/ukay-base-sentence-transformer
ukeme
2023-09-02T17:00:03Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "dataset:embedding-data/sentence-compression", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-09-02T16:41:46Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - embedding-data/sentence-compression --- # ukeme/ukay-base-sentence-transformer This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ukeme/ukay-base-sentence-transformer') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ukeme/ukay-base-sentence-transformer') model = AutoModel.from_pretrained('ukeme/ukay-base-sentence-transformer') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ukeme/ukay-base-sentence-transformer) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
KingKazma/xsum_t5-small_lora_500_10_50000_8_e5_s6789_v4_l4_r4
KingKazma
2023-09-02T16:59:22Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T16:59:21Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
KingKazma/xsum_t5-small_p_tuning_500_10_50000_8_e4_s6789_v4_l4_v100
KingKazma
2023-09-02T16:45:49Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T16:45:45Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
The-matt/autumn-shadow-48_190
The-matt
2023-09-02T16:43:05Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T16:43:01Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
Kamer/NoFrequentWords
Kamer
2023-09-02T16:38:13Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-02T14:34:27Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: NoFrequentWords results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NoFrequentWords This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 1.5140 - eval_Accuracy: 0.4027 - eval_F1_macro: 0.1427 - eval_F1_class_0: 0.9205 - eval_F1_class_1: 0.6667 - eval_F1_class_2: 0.1782 - eval_F1_class_3: 0.0 - eval_F1_class_4: 0.0 - eval_F1_class_5: 0.0 - eval_F1_class_6: 0.0204 - eval_F1_class_7: 0.0 - eval_F1_class_8: 0.0 - eval_F1_class_9: 0.9070 - eval_F1_class_10: 0.0253 - eval_F1_class_11: 0.0 - eval_F1_class_12: 0.1140 - eval_F1_class_13: 0.0 - eval_F1_class_14: 0.0220 - eval_F1_class_15: 0.0 - eval_F1_class_16: 0.0 - eval_F1_class_17: 0.0 - eval_F1_class_18: 0.0 - eval_F1_class_19: 0.0 - eval_runtime: 17.6645 - eval_samples_per_second: 63.97 - eval_steps_per_second: 4.019 - epoch: 2.92 - step: 9500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.32.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
Naska223/AWPortrait
Naska223
2023-09-02T16:34:20Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-02T15:53:39Z
--- license: creativeml-openrail-m ---
CzarnyRycerz/q-FrozenLake-v1-4x4-noSlippery
CzarnyRycerz
2023-09-02T16:34:07Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T16:34:03Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="CzarnyRycerz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
HorcruxNo13/beit-base-patch16-224-pt22k-ft22k
HorcruxNo13
2023-09-02T16:27:17Z
192
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-base-patch16-224-pt22k-ft22k", "base_model:finetune:microsoft/beit-base-patch16-224-pt22k-ft22k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-02T13:12:22Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224-pt22k-ft22k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: beit-base-patch16-224-pt22k-ft22k results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.6866666666666666 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224-pt22k-ft22k This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6312 - Accuracy: 0.6867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 8 | 3.4268 | 0.275 | | 6.7921 | 2.0 | 16 | 0.6216 | 0.7083 | | 0.7831 | 3.0 | 24 | 0.5972 | 0.7417 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
thegrigorian/ppo-LunarLander-v2
thegrigorian
2023-09-02T16:26:52Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T16:26:33Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 258.11 +/- 19.78 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
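The card leaves the usage stub as a TODO; a minimal sketch with `huggingface_sb3`, where the checkpoint filename is an assumption (check the repo's file listing):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; verify it against the repo contents.
checkpoint = load_from_hub(repo_id="thegrigorian/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```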
raymondowf/flan-t5-large-qlora-financial-phrasebank
raymondowf
2023-09-02T16:21:01Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T16:20:56Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
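A sketch of reproducing the nf4 settings above when loading the base model for this adapter; the base ID `google/flan-t5-large` is inferred from the repo name rather than stated in the card:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large", quantization_config=bnb_config)  # inferred base
model = PeftModel.from_pretrained(base, "raymondowf/flan-t5-large-qlora-financial-phrasebank")
```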
patientxtr/photon_v1_onnx
patientxtr
2023-09-02T16:12:46Z
12
1
diffusers
[ "diffusers", "onnx", "text-to-image", "license:unknown", "diffusers:OnnxStableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-28T17:42:59Z
--- license: unknown library_name: diffusers pipeline_tag: text-to-image --- Microsoft Olive-optimized ONNX version of "https://huggingface.co/digiplay/Photon_v1"
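A minimal generation sketch for this ONNX pipeline; the execution provider is an assumption (Olive builds usually target DirectML, but `CPUExecutionProvider` also works):

```python
from diffusers import OnnxStableDiffusionPipeline

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "patientxtr/photon_v1_onnx",
    provider="DmlExecutionProvider",  # assumption; use "CPUExecutionProvider" without DirectML
)
image = pipe("portrait photo, natural light").images[0]
image.save("photon_sample.png")
```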
KingKazma/xsum_t5-small_lora_500_10_50000_8_e3_s6789_v4_l4_r4
KingKazma
2023-09-02T16:04:17Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T16:04:16Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
asparius/bert-base-combined-large
asparius
2023-09-02T15:58:53Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-26T16:01:47Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-base-combined-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-combined-large This model is a fine-tuned version of [dbmdz/bert-base-turkish-uncased](https://huggingface.co/dbmdz/bert-base-turkish-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3029 - Accuracy: 0.8940 - F1: 0.8956 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2668 | 1.0 | 3077 | 0.2812 | 0.8931 | 0.8915 | | 0.2042 | 2.0 | 6154 | 0.2675 | 0.8952 | 0.8950 | | 0.1453 | 3.0 | 9231 | 0.3029 | 0.8940 | 0.8956 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
The-matt/autumn-shadow-48_130
The-matt
2023-09-02T15:55:52Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:55:47Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
btamm12/roberta-base-finetuned-wls-manual-10ep
btamm12
2023-09-02T15:52:47Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:50:16Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-wls-manual-10ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-wls-manual-10ep This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0599 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8201 | 0.93 | 7 | 1.5286 | | 1.4462 | 2.0 | 15 | 1.3480 | | 1.3032 | 2.93 | 22 | 1.3377 | | 1.2564 | 4.0 | 30 | 1.1907 | | 1.246 | 4.93 | 37 | 1.1702 | | 1.1777 | 6.0 | 45 | 1.1549 | | 1.118 | 6.93 | 52 | 1.0611 | | 1.1339 | 8.0 | 60 | 1.1084 | | 1.1158 | 8.93 | 67 | 1.1376 | | 1.0143 | 9.33 | 70 | 1.1225 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
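A minimal fill-mask sketch for this checkpoint (not part of the card); RoBERTa models use `<mask>` as the mask token:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="btamm12/roberta-base-finetuned-wls-manual-10ep")
for pred in fill("The weather is <mask> today."):
    print(pred["token_str"], pred["score"])
```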
The-matt/autumn-shadow-48_120
The-matt
2023-09-02T15:49:07Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:49:04Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
norman365/atom-Llama2-chinese-7b-ggml.bin
norman365
2023-09-02T15:47:03Z
0
0
null
[ "zh", "license:apache-2.0", "region:us" ]
null
2023-09-02T15:46:12Z
--- license: apache-2.0 language: - zh ---
kaneki1933/testes
kaneki1933
2023-09-02T15:44:09Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-20T17:55:55Z
--- license: creativeml-openrail-m ---
btamm12/bert-base-uncased-finetuned-wls-manual-9ep-lower
btamm12
2023-09-02T15:42:56Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:40:41Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-wls-manual-9ep-lower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wls-manual-9ep-lower This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 9 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1096 | 0.93 | 7 | 1.9445 | | 1.5963 | 2.0 | 15 | 1.5711 | | 1.4734 | 2.93 | 22 | 1.4391 | | 1.3716 | 4.0 | 30 | 1.4138 | | 1.2719 | 4.93 | 37 | 1.2480 | | 1.2486 | 6.0 | 45 | 1.2483 | | 1.2156 | 6.93 | 52 | 1.2662 | | 1.1523 | 8.0 | 60 | 1.3172 | | 1.1596 | 8.4 | 63 | 1.2467 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
The-matt/autumn-shadow-48_110
The-matt
2023-09-02T15:41:55Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:41:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
btamm12/bert-base-cased-finetuned-wls-manual-9ep
btamm12
2023-09-02T15:40:33Z
116
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:38:23Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-base-cased-finetuned-wls-manual-9ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-wls-manual-9ep This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 9 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1588 | 0.93 | 7 | 1.8380 | | 1.6343 | 2.0 | 15 | 1.6555 | | 1.6181 | 2.93 | 22 | 1.5436 | | 1.4245 | 4.0 | 30 | 1.4227 | | 1.3525 | 4.93 | 37 | 1.4219 | | 1.2804 | 6.0 | 45 | 1.3093 | | 1.2167 | 6.93 | 52 | 1.2617 | | 1.1662 | 8.0 | 60 | 1.2366 | | 1.1817 | 8.4 | 63 | 1.2008 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
shmart/shmisper-medium-PL
shmart
2023-09-02T15:40:20Z
1
0
transformers
[ "transformers", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-04-11T20:17:19Z
--- license: mit --- # faster-whisper finetuned model for PL phonetic transcription This model is the result of finetuning the `openai/whisper-medium` model on a custom PL dataset and then converting it to a `faster-whisper` model. The training dataset also included 5 English speakers and 4 Japanese speakers for whom Polish transcriptions were manually created. ## About the model: - I created this because the original Whisper model does not produce precise transcriptions, e.g. disfluencies like stuttering or repetition are normalized away. - This model generates more accurate transcriptions, so it is better suited for automatically building unsupervised datasets for Text-To-Speech model training. - I noticed it also normalizes numbers into word form; no digits appear in the transcript. - English audio is transcribed into phonetic Polish transcription instead of being left in its original English form or translated into Polish as in the original Whisper model (however, due to the low amount of data it was trained on, it is far from perfect). ## Example: ``` from faster_whisper import WhisperModel import huggingface_hub model_path = huggingface_hub.snapshot_download("shmart/shmisper-medium-PL") model = WhisperModel(model_path, device="cuda", compute_type="float16") options = { 'language': "pl", 'beam_size': 5, 'without_timestamps': True, 'suppress_tokens': [], 'log_prob_threshold': None, 'no_speech_threshold': 0.05 } input_wav_path = './audio.wav' result, info = model.transcribe(input_wav_path, **options) text = ' '.join([r.text for r in result]) print(text) ```
rajaswa-postman/es_chat_lora
rajaswa-postman
2023-09-02T15:39:41Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:22:10Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
haddadalwi/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad2-noAns
haddadalwi
2023-09-02T15:36:53Z
117
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "base_model:google-bert/bert-large-uncased-whole-word-masking-finetuned-squad", "base_model:finetune:google-bert/bert-large-uncased-whole-word-masking-finetuned-squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-09-01T16:30:38Z
--- license: apache-2.0 base_model: bert-large-uncased-whole-word-masking-finetuned-squad tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad2-noAns results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad2-noAns This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 266 | 0.0000 | | 0.0649 | 2.0 | 532 | 0.0000 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
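A minimal extractive-QA sketch for this checkpoint (not part of the card); since it was tuned on SQuAD v2's unanswerable examples, the pipeline's `handle_impossible_answer` flag is relevant:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="haddadalwi/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad2-noAns",
)
result = qa(
    question="Which dataset was the model tuned on?",
    context="The model was fine-tuned on the squad_v2 dataset.",
    handle_impossible_answer=True,  # allow a no-answer prediction, as in SQuAD v2
)
print(result)
```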
btamm12/bert-base-cased-finetuned-wls-manual-8ep
btamm12
2023-09-02T15:33:27Z
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:31:23Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-base-cased-finetuned-wls-manual-8ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-wls-manual-8ep This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1599 | 0.93 | 7 | 1.8488 | | 1.6266 | 2.0 | 15 | 1.6340 | | 1.5518 | 2.93 | 22 | 1.5175 | | 1.382 | 4.0 | 30 | 1.4146 | | 1.3309 | 4.93 | 37 | 1.4054 | | 1.2715 | 6.0 | 45 | 1.3004 | | 1.2182 | 6.93 | 52 | 1.2688 | | 1.1738 | 7.47 | 56 | 1.2962 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
btamm12/roberta-base-finetuned-wls-manual-7ep
btamm12
2023-09-02T15:31:16Z
124
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:28:58Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-wls-manual-7ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-wls-manual-7ep This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8224 | 0.93 | 7 | 1.5284 | | 1.4374 | 2.0 | 15 | 1.3331 | | 1.2988 | 2.93 | 22 | 1.3356 | | 1.2666 | 4.0 | 30 | 1.1919 | | 1.2422 | 4.93 | 37 | 1.1769 | | 1.1804 | 6.0 | 45 | 1.1424 | | 1.1443 | 6.53 | 49 | 1.1581 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
The-matt/autumn-shadow-48_90
The-matt
2023-09-02T15:27:43Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:27:39Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
btamm12/bert-base-cased-finetuned-wls-manual-7ep
btamm12
2023-09-02T15:26:41Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:24:40Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-base-cased-finetuned-wls-manual-7ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-wls-manual-7ep This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2757 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1707 | 0.93 | 7 | 1.9153 | | 1.658 | 2.0 | 15 | 1.6462 | | 1.5689 | 2.93 | 22 | 1.5263 | | 1.4013 | 4.0 | 30 | 1.4385 | | 1.3501 | 4.93 | 37 | 1.4224 | | 1.293 | 6.0 | 45 | 1.3189 | | 1.2473 | 6.53 | 49 | 1.2231 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
Satorio/so-vits-4.1-Nice_Nature
Satorio
2023-09-02T15:22:42Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2023-08-06T13:14:51Z
--- license: cc-by-nc-4.0 --- Model: Nice Nature (Umamusume: Pretty Derby) Dataset Source: DMM Umamusume Game Still training to improve the model... Maybe better, maybe not...
The-matt/autumn-shadow-48_80
The-matt
2023-09-02T15:21:01Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:20:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
crewdon/AICategoryMapping-multilingual-e5-small
crewdon
2023-09-02T15:20:57Z
14
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-09-02T15:05:10Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # AICategoryMapping-multilingual-e5-small This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 94 with parameters: ``` {'batch_size': 400} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 40, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 376, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
btamm12/bert-base-uncased-finetuned-wls-manual-6ep-lower
btamm12
2023-09-02T15:20:25Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:18:28Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-wls-manual-6ep-lower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wls-manual-6ep-lower This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3314 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1123 | 0.93 | 7 | 1.9531 | | 1.6034 | 2.0 | 15 | 1.5832 | | 1.489 | 2.93 | 22 | 1.4553 | | 1.3975 | 4.0 | 30 | 1.4448 | | 1.3074 | 4.93 | 37 | 1.2918 | | 1.3083 | 5.6 | 42 | 1.4088 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
btamm12/bert-base-uncased-finetuned-wls-manual-5ep-lower
btamm12
2023-09-02T15:14:00Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:12:03Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-wls-manual-5ep-lower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wls-manual-5ep-lower This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4858 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1142 | 0.93 | 7 | 1.9585 | | 1.6082 | 2.0 | 15 | 1.5910 | | 1.4973 | 2.93 | 22 | 1.4644 | | 1.4145 | 4.0 | 30 | 1.4717 | | 1.335 | 4.67 | 35 | 1.4035 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
The-matt/autumn-shadow-48_70
The-matt
2023-09-02T15:13:29Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:13:13Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
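For readers who want to reproduce the quantization setup listed above, this is a sketch of the equivalent `transformers` config object; the parameter names mirror the list in the card, and the surrounding training code is not shown:

```python
from transformers import BitsAndBytesConfig

# 8-bit config matching the list in the card above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```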
btamm12/roberta-base-finetuned-wls-manual-4ep
btamm12
2023-09-02T15:09:55Z
123
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:07:08Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-wls-manual-4ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-wls-manual-4ep This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8232 | 0.93 | 7 | 1.5217 | | 1.4594 | 2.0 | 15 | 1.4173 | | 1.402 | 2.93 | 22 | 1.3668 | | 1.3193 | 3.73 | 28 | 1.2170 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
KingKazma/xsum_t5-small_lora_500_10_50000_8_e1_s6789_v4_l4_r4
KingKazma
2023-09-02T15:09:11Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:09:10Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
DrishtiSharma/mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.001
DrishtiSharma
2023-09-02T15:04:08Z
9
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "translation", "generated_from_trainer", "base_model:facebook/mbart-large-50", "base_model:finetune:facebook/mbart-large-50", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-09-02T12:48:56Z
--- license: mit base_model: facebook/mbart-large-50 tags: - translation - generated_from_trainer metrics: - bleu - rouge model-index: - name: mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.001 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.001 This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9549 - Bleu: 45.0307 - Rouge: {'rouge1': 0.7049318825090395, 'rouge2': 0.5238048751750992, 'rougeL': 0.684187379601513, 'rougeLsum': 0.6843574853855577} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:----------------------------------------------------------------------------------------------------------------------------:| | 1.4627 | 1.0 | 4500 | 1.0255 | 42.1880 | {'rouge1': 0.6725633216905762, 'rouge2': 0.48605402524493657, 'rougeL': 0.6498853764470456, 'rougeLsum': 0.6501981166312041} | | 0.8878 | 2.0 | 9000 | 0.9572 | 44.1734 | {'rouge1': 0.6912686406245903, 'rouge2': 0.5093695171345348, 'rougeL': 0.6701896043455414, 'rougeLsum': 0.6703473419504804} | | 0.7125 | 3.0 | 13500 | 0.9414 | 44.8709 | {'rouge1': 0.7051197958532004, 'rouge2': 0.5210482863677958, 'rougeL': 0.6843075431636916, 'rougeLsum': 0.6846265298079588} | | 0.6092 | 4.0 | 18000 | 0.9549 | 45.0821 | {'rouge1': 0.7047932899349161, 'rouge2': 0.523739339466653, 'rougeL': 0.6840127607742443, 'rougeLsum': 0.684202100852132} | ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4.dev0 - Tokenizers 0.13.3
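As the card omits inference code, here is a sketch of en→es generation with this checkpoint, following the standard mBART-50 pattern (source language `en_XX`, forced target `es_XX`); the input sentence is a placeholder:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "DrishtiSharma/mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.001"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
# Force Spanish as the first generated token, per the mBART-50 translation recipe.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```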
btamm12/roberta-base-finetuned-wls-manual-3ep
btamm12
2023-09-02T15:01:54Z
129
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T14:59:09Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-wls-manual-3ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-wls-manual-3ep This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8156 | 0.93 | 7 | 1.5116 | | 1.4371 | 2.0 | 15 | 1.3472 | | 1.3218 | 2.8 | 21 | 1.3278 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
yaohuacn/a2c-PandaPickAndPlace-v3
yaohuacn
2023-09-02T15:00:35Z
3
0
stable-baselines3
[ "stable-baselines3", "PandaPickAndPlace-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T14:45:56Z
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaPickAndPlace-v3
      type: PandaPickAndPlace-v3
    metrics:
    - type: mean_reward
      value: -50.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaPickAndPlace-v3**

This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention; check the repo's file list if loading fails):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed filename, following the standard {algo}-{env}.zip convention.
checkpoint = load_from_hub("yaohuacn/a2c-PandaPickAndPlace-v3", "a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
tsukemono/japanese-stablelm-base-alpha-7b-qlora-marisa
tsukemono
2023-09-02T14:58:35Z
0
0
null
[ "ja", "region:us" ]
null
2023-08-28T08:24:30Z
---
language:
- ja
---

## Model overview

A model that lets you chat with Kirisame Marisa. This is LoRA data for [Japanese-StableLM-Base-Alpha-7B](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b).

## Usage

An example of how to run inference is given in how_to_use.ipynb; we hope it serves as a useful reference. By giving the model a prompt of the form 「ユーザー: hogehoge\n魔理沙: 」 (literally "User: <message>\nMarisa: ", where "hogehoge" is a placeholder), you can chat with Marisa.

## Notes

This is a Touhou Project fan work.

---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.4.0.dev0
btamm12/roberta-base-finetuned-wls-manual-2ep
btamm12
2023-09-02T14:53:53Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T14:51:11Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-wls-manual-2ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-wls-manual-2ep This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3944 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8161 | 0.93 | 7 | 1.5123 | | 1.4497 | 1.87 | 14 | 1.3929 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
nightdude/config_821
nightdude
2023-09-02T14:53:38Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T14:52:34Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
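A sketch of the 4-bit config listed above as a `transformers` object (the compute dtype is spelled out as a torch type; the surrounding training code is not shown):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 config matching the list in the card above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```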
btamm12/bert-base-cased-finetuned-wls-manual-2ep
btamm12
2023-09-02T14:48:32Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T14:46:11Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-base-cased-finetuned-wls-manual-2ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-wls-manual-2ep This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6386 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1651 | 0.93 | 7 | 1.8869 | | 1.6819 | 1.87 | 14 | 1.7442 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
The-matt/autumn-shadow-48_30
The-matt
2023-09-02T14:45:31Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T14:45:15Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
DrYond3r/OrelsanV1
DrYond3r
2023-09-02T14:44:10Z
0
0
null
[ "arxiv:1910.09700", "license:openrail", "region:us" ]
null
2023-08-30T07:07:50Z
--- license: openrail --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
btamm12/bert-base-cased-finetuned-wls-manual-1ep
btamm12
2023-09-02T14:42:09Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T14:40:23Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-base-cased-finetuned-wls-manual-1ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-wls-manual-1ep This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8675 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1332 | 0.93 | 7 | 1.9236 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
foxxy-hm/wav2vec2-base-finetune-vi-v2
foxxy-hm
2023-09-02T14:41:30Z
25
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-01T13:15:22Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-base-finetune-vi-v2 results: [] widget: - example_title: SOICT 2023 - SLU public test 1 src: >- https://huggingface.co/foxxy-hm/wav2vec2-base-finetune-vi/raw/main/audio-test/055R7BruAa333g9teFfamQH.wav - example_title: SOICT 2023 - SLU public test 2 src: >- https://huggingface.co/foxxy-hm/wav2vec2-base-finetune-vi/raw/main/audio-test/0BLHhoJexE8THB8BrsZxWbh.wav - example_title: SOICT 2023 - SLU public test 3 src: >- https://huggingface.co/foxxy-hm/wav2vec2-base-finetune-vi/raw/main/audio-test/1ArUTGWJQ9YALH2xaNhU6GV.wav --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetune-vi-v2 This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2188 - Wer: 0.1391 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 24 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.3873 | 0.67 | 500 | 2.4321 | 0.9719 | | 1.4812 | 1.34 | 1000 | 0.5449 | 0.3062 | | 0.7731 | 2.0 | 1500 | 0.3793 | 0.2263 | | 0.542 | 2.67 | 2000 | 0.3021 | 0.2002 | | 0.4461 | 3.34 | 2500 | 0.2905 | 0.1862 | | 0.4175 | 4.01 | 3000 | 0.2687 | 0.1771 | | 0.3878 | 4.67 | 3500 | 0.2958 | 0.1751 | | 0.3373 | 5.34 | 4000 | 0.2713 | 0.1721 | | 0.3046 | 6.01 | 4500 | 0.2505 | 0.1616 | | 0.2933 | 6.68 | 5000 | 0.2561 | 0.1611 | | 0.285 | 7.34 | 5500 | 0.2405 | 0.1617 | | 0.2998 | 8.01 | 6000 | 0.2363 | 0.1578 | | 0.2486 | 8.68 | 6500 | 0.2254 | 0.1570 | | 0.2682 | 9.35 | 7000 | 0.2306 | 0.1547 | | 0.2327 | 10.01 | 7500 | 0.2289 | 0.1537 | | 0.2141 | 10.68 | 8000 | 0.2383 | 0.1499 | | 0.2124 | 11.35 | 8500 | 0.2261 | 0.15 | | 0.2156 | 12.02 | 9000 | 0.2142 | 0.1511 | | 0.2082 | 12.68 | 9500 | 0.2386 | 0.1467 | | 0.1814 | 13.35 | 10000 | 0.2301 | 0.1448 | | 0.1836 | 14.02 | 10500 | 0.2302 | 0.1446 | | 0.18 | 14.69 | 11000 | 0.2244 | 0.1445 | | 0.1756 | 15.35 | 11500 | 0.2280 | 0.1439 | | 0.1693 | 16.02 | 12000 | 0.2307 | 0.1426 | | 0.1588 | 16.69 | 12500 | 0.2164 | 0.1422 | | 0.1587 | 17.36 | 13000 | 0.2198 | 0.1417 | | 0.1738 | 18.02 | 13500 | 0.2282 | 0.1411 | | 0.1524 | 18.69 | 14000 | 0.2274 | 0.1394 | | 0.1569 | 19.36 | 14500 | 0.2178 | 0.1396 | | 0.1433 | 20.03 | 15000 | 0.2200 | 0.1413 | | 0.1512 | 20.69 | 15500 | 0.2193 | 0.1382 | | 0.1375 | 21.36 | 16000 | 0.2174 | 0.1393 | | 0.1302 | 22.03 | 16500 | 0.2246 | 0.1391 | | 0.146 | 22.7 | 17000 | 0.2222 | 0.1392 | | 0.1265 | 23.36 | 17500 | 0.2188 | 0.1391 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
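Since the card lists widget audio samples but no inference code, here is a minimal transcription sketch; "audio.wav" is a placeholder path, and the widget URLs above should also work as inputs:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="foxxy-hm/wav2vec2-base-finetune-vi-v2")
print(asr("audio.wav")["text"])  # replace with a real Vietnamese audio file
```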
Campqt/ppo-LunarLander-v2-unit8
Campqt
2023-09-02T14:39:07Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T14:24:15Z
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -78.14 +/- 80.44
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

# Hyperparameters

```python
{'exp_name': 'ppo'
 'seed': 1
 'torch_deterministic': True
 'cuda': True
 'track': False
 'wandb_project_name': 'cleanRL'
 'wandb_entity': None
 'capture_video': False
 'env_id': 'LunarLander-v2'
 'total_timesteps': 500000
 'learning_rate': 0.00025
 'num_envs': 4
 'num_steps': 128
 'anneal_lr': True
 'gae': True
 'gamma': 0.99
 'gae_lambda': 0.95
 'num_minibatches': 4
 'update_epochs': 4
 'norm_adv': True
 'clip_coef': 0.2
 'clip_vloss': True
 'ent_coef': 0.01
 'vf_coef': 0.5
 'max_grad_norm': 0.5
 'target_kl': None
 'repo_id': 'Campqt/ppo-LunarLander-v2-unit8'
 'batch_size': 512
 'minibatch_size': 128}
```
rrozb/Reinforce-1
rrozb
2023-09-02T14:36:41Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T14:36:31Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Lenouche/Sblerky
Lenouche
2023-09-02T14:30:42Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-13T23:01:35Z
--- license: openrail language: - fr ---
Lenouche/Conkerax
Lenouche
2023-09-02T14:30:03Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-13T22:13:05Z
--- license: openrail language: - fr ---
Lenouche/GiaTechAndGaming
Lenouche
2023-09-02T14:28:46Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-17T01:44:54Z
--- language: - fr license: openrail ---
Zevin2023/MoC-IQA
Zevin2023
2023-09-02T14:28:05Z
0
0
null
[ "aa", "license:openrail", "region:us" ]
null
2023-09-02T14:02:17Z
--- license: openrail language: - aa metrics: - accuracy ---
Lenouche/TevIciJapon
Lenouche
2023-09-02T14:27:59Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-17T18:47:02Z
--- language: - fr license: openrail ---
Lenouche/LouisSan
Lenouche
2023-09-02T14:27:01Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-27T00:10:33Z
--- language: - fr license: openrail ---
Lenouche/BenjaminCode
Lenouche
2023-09-02T14:26:29Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-09-02T00:06:50Z
--- language: - fr license: openrail ---
CzarnyRycerz/ppo-Huggy
CzarnyRycerz
2023-09-02T14:16:53Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-09-02T14:16:42Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: CzarnyRycerz/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
VinayHajare/ppo-LunarLander-v2
VinayHajare
2023-09-02T13:51:21Z
5
3
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T06:37:42Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 263.26 +/- 19.25
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
# !pip install gymnasium huggingface-sb3 stable_baselines3[extra]
import gymnasium as gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

repo_id = "VinayHajare/ppo-LunarLander-v2"
filename = "ppo-LunarLander-v2.zip"

eval_env = gym.make("LunarLander-v2", render_mode="human")
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, print_system_info=True)

mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")

# Enjoy trained agent
observation, info = eval_env.reset()
for _ in range(1000):
    action, _states = model.predict(observation, deterministic=True)
    observation, rewards, terminated, truncated, info = eval_env.step(action)
    eval_env.render()
```
DrishtiSharma/mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.0
DrishtiSharma
2023-09-02T13:50:23Z
13
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "translation", "generated_from_trainer", "base_model:facebook/mbart-large-50", "base_model:finetune:facebook/mbart-large-50", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-09-02T10:31:15Z
--- license: mit base_model: facebook/mbart-large-50 tags: - translation - generated_from_trainer metrics: - bleu - rouge model-index: - name: mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.0 This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9549 - Bleu: 45.0307 - Rouge: {'rouge1': 0.7049318825090395, 'rouge2': 0.5238048751750992, 'rougeL': 0.684187379601513, 'rougeLsum': 0.6843574853855577} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:----------------------------------------------------------------------------------------------------------------------------:| | 1.4627 | 1.0 | 4500 | 1.0255 | 42.1880 | {'rouge1': 0.6725633216905762, 'rouge2': 0.48605402524493657, 'rougeL': 0.6498853764470456, 'rougeLsum': 0.6501981166312041} | | 0.8878 | 2.0 | 9000 | 0.9572 | 44.1734 | {'rouge1': 0.6912686406245903, 'rouge2': 0.5093695171345348, 'rougeL': 0.6701896043455414, 'rougeLsum': 0.6703473419504804} | | 0.7125 | 3.0 | 13500 | 0.9414 | 44.8709 | {'rouge1': 0.7051197958532004, 'rouge2': 0.5210482863677958, 'rougeL': 0.6843075431636916, 'rougeLsum': 0.6846265298079588} | | 0.6092 | 4.0 | 18000 | 0.9549 | 45.0821 | {'rouge1': 0.7047932899349161, 'rouge2': 0.523739339466653, 'rougeL': 0.6840127607742443, 'rougeLsum': 0.684202100852132} | ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4.dev0 - Tokenizers 0.13.3
mohammadhossein/bert-base-uncased-riddle-finetuned
mohammadhossein
2023-09-02T13:42:47Z
104
0
transformers
[ "transformers", "pytorch", "bert", "multiple-choice", "mhs", "generated_from_trainer", "en", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2023-09-02T13:38:38Z
--- language: - en license: apache-2.0 base_model: bert-base-uncased tags: - mhs - generated_from_trainer metrics: - accuracy model-index: - name: bert_base_uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_base_uncased This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the sentence_puzzle dataset. It achieves the following results on the evaluation set: - Loss: 0.0932 - Accuracy: 0.9365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 36 | 0.2763 | 0.9048 | | No log | 2.0 | 72 | 0.2388 | 0.9206 | | No log | 3.0 | 108 | 0.2465 | 0.9206 | | No log | 4.0 | 144 | 0.0958 | 0.9206 | | No log | 5.0 | 180 | 0.0932 | 0.9365 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
ckandemir/xlm-roberta-base-finetuned-panx-de
ckandemir
2023-09-02T13:28:15Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-02T08:51:32Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.en split: validation args: PAN-X.en metrics: - name: F1 type: f1 value: 0.6993243243243242 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3902 - F1: 0.6993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1085 | 1.0 | 50 | 0.5687 | 0.5579 | | 0.5001 | 2.0 | 100 | 0.4186 | 0.6781 | | 0.3535 | 3.0 | 150 | 0.3902 | 0.6993 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
Kamer/NoDuplicates
Kamer
2023-09-02T13:27:46Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T16:09:33Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: NoDuplicates
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# NoDuplicates

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4279
- Accuracy: 0.9128
- F1 Macro: 0.8384
- F1 Class 0: 0.9406
- F1 Class 1: 0.3333
- F1 Class 2: 0.9127
- F1 Class 3: 0.6471
- F1 Class 4: 0.8254
- F1 Class 5: 0.8293
- F1 Class 6: 0.8767
- F1 Class 7: 0.7606
- F1 Class 8: 0.7500
- F1 Class 9: 0.9878
- F1 Class 10: 0.9444
- F1 Class 11: 0.9630
- F1 Class 12: 0.9265
- F1 Class 13: 0.8980
- F1 Class 14: 0.8444
- F1 Class 15: 0.8132
- F1 Class 16: 0.7778
- F1 Class 17: 0.9651
- F1 Class 18: 0.9574
- F1 Class 19: 0.8148

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Class 0 | F1 Class 1 | F1 Class 2 | F1 Class 3 | F1 Class 4 | F1 Class 5 | F1 Class 6 | F1 Class 7 | F1 Class 8 | F1 Class 9 | F1 Class 10 | F1 Class 11 | F1 Class 12 | F1 Class 13 | F1 Class 14 | F1 Class 15 | F1 Class 16 | F1 Class 17 | F1 Class 18 | F1 Class 19 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1.4862 | 0.27 | 300 | 0.8201 | 0.7845 | 0.4484 | 0.8675 | 0.0 | 0.8627 | 0.0 | 0.6733 | 0.0 | 0.6627 | 0.0 | 0.0 | 0.9862 | 0.1935 | 0.9600 | 0.8299 | 0.0833 | 0.2353 | 0.24 | 0.0400 | 0.8852 | 0.9451 | 0.5033 |
| 0.7269 | 0.53 | 600 | 0.5951 | 0.8491 | 0.6504 | 0.9048 | 0.0 | 0.8567 | 0.0 | 0.7596 | 0.6111 | 0.6887 | 0.0 | 0.0 | 0.9877 | 0.8033 | 0.9286 | 0.8798 | 0.9167 | 0.74 | 0.6857 | 0.5823 | 0.9506 | 0.9485 | 0.7640 |
| 0.5429 | 0.8 | 900 | 0.5375 | 0.8637 | 0.7086 | 0.8904 | 0.0 | 0.8589 | 0.0 | 0.7254 | 0.7805 | 0.8215 | 0.6769 | 0.0 | 0.9877 | 0.7833 | 1.0 | 0.9022 | 0.9130 | 0.7912 | 0.7733 | 0.7048 | 0.9032 | 0.9474 | 0.7119 |
| 0.4594 | 1.06 | 1200 | 0.5110 | 0.8805 | 0.7113 | 0.9099 | 0.0 | 0.8925 | 0.0 | 0.7706 | 0.7391 | 0.8139 | 0.4091 | 0.0 | 0.9908 | 0.8785 | 1.0 | 0.8983 | 0.8936 | 0.8090 | 0.7556 | 0.7907 | 0.9529 | 0.9574 | 0.7647 |
| 0.3484 | 1.33 | 1500 | 0.4679 | 0.8951 | 0.7667 | 0.9180 | 0.0 | 0.9080 | 0.6957 | 0.8 | 0.7619 | 0.8299 | 0.6875 | 0.0 | 0.9908 | 0.8909 | 1.0 | 0.9196 | 0.9130 | 0.8172 | 0.7865 | 0.7527 | 0.9398 | 0.9474 | 0.7755 |
| 0.3744 | 1.59 | 1800 | 0.4359 | 0.8951 | 0.7774 | 0.9290 | 0.0 | 0.8815 | 0.8462 | 0.8049 | 0.7805 | 0.8449 | 0.7059 | 0.0 | 0.9908 | 0.9346 | 1.0 | 0.9143 | 0.8980 | 0.8387 | 0.7475 | 0.7179 | 0.9647 | 0.9583 | 0.7895 |
| 0.3514 | 1.86 | 2100 | 0.5161 | 0.8903 | 0.7592 | 0.9109 | 0.0 | 0.8973 | 0.6429 | 0.7603 | 0.7907 | 0.8571 | 0.7077 | 0.0 | 0.9908 | 0.9346 | 1.0 | 0.8971 | 0.8936 | 0.7042 | 0.7324 | 0.7857 | 0.9595 | 0.9574 | 0.7609 |
| 0.3111 | 2.12 | 2400 | 0.4327 | 0.9080 | 0.8027 | 0.9283 | 0.3333 | 0.9141 | 0.7407 | 0.8207 | 0.8095 | 0.8622 | 0.7606 | 0.0 | 0.9908 | 0.9298 | 0.9630 | 0.9215 | 0.9167 | 0.8041 | 0.8 | 0.8132 | 0.9651 | 0.9574 | 0.8224 |
| 0.2088 | 2.39 | 2700 | 0.4356 | 0.9128 | 0.8452 | 0.9386 | 0.3333 | 0.9058 | 0.8462 | 0.8265 | 0.8 | 0.8562 | 0.7429 | 0.7500 | 0.9893 | 0.9346 | 0.9630 | 0.9322 | 0.8936 | 0.8205 | 0.8372 | 0.7765 | 0.9651 | 0.9574 | 0.8350 |
| 0.2317 | 2.65 | 3000 | 0.4294 | 0.9137 | 0.8217 | 0.9365 | 0.3333 | 0.9102 | 0.625 | 0.8243 | 0.8293 | 0.875 | 0.8056 | 0.3333 | 0.9893 | 0.9444 | 0.9630 | 0.9284 | 0.8980 | 0.8478 | 0.8471 | 0.7816 | 0.9651 | 0.9574 | 0.8400 |
| 0.1816 | 2.92 | 3300 | 0.4279 | 0.9128 | 0.8384 | 0.9406 | 0.3333 | 0.9127 | 0.6471 | 0.8254 | 0.8293 | 0.8767 | 0.7606 | 0.7500 | 0.9878 | 0.9444 | 0.9630 | 0.9265 | 0.8980 | 0.8444 | 0.8132 | 0.7778 | 0.9651 | 0.9574 | 0.8148 |

### Framework versions

- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
LiChenYi/QA
LiChenYi
2023-09-02T13:05:16Z
0
0
null
[ "license:unknown", "region:us" ]
null
2023-09-02T12:55:15Z
---
license: unknown
---

A record of problems encountered while using AI tools, so that later users can avoid the same pitfalls.

# 2. Colab usage problems

1. Pulling data from a Hugging Face repository inside Colab fails with the following error:

Connecting to [huggingface.co](http://huggingface.co/) ([huggingface.co](http://huggingface.co/))|18.239.50.16|:443... connected. HTTP request sent, awaiting response... 401 Unauthorized

Solution: open your Hugging Face settings and set the user access requests option ("User Access requests") to disabled.
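If the repository is gated or private, an alternative to disabling access requests is to authenticate inside Colab; a hedged sketch (the token value is a placeholder):

```python
from huggingface_hub import login

# Paste a token from https://huggingface.co/settings/tokens
login(token="hf_...")  # placeholder; do not hard-code real tokens in notebooks
```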
quantumaikr/KoreanLM-3B
quantumaikr
2023-09-02T12:55:53Z
109
1
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "korean", "foundation", "ko", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-21T09:02:18Z
---
language:
- ko
- en
pipeline_tag: text-generation
tags:
- llama
- korean
- foundation
---

<p align="center" width="100%">
<img src="https://i.imgur.com/snFDU0P.png" alt="KoreanLM icon" style="width: 500px; display: block; margin: auto; border-radius: 10%;">
</p>

# KoreanLM: a Korean language model project

KoreanLM is an open-source project for developing Korean language models. Most current language models focus on English, so they are relatively under-trained on Korean and often tokenize it inefficiently. The KoreanLM project was started to solve these problems and provide a language model optimized for Korean.

## Project goals

1. Develop a language model specialized for Korean: build a model that reflects the grammar, vocabulary, and cultural characteristics of Korean so that it understands and generates Korean more accurately.

2. Introduce an efficient tokenization scheme: improve model performance with a new tokenization scheme that analyzes Korean text efficiently and accurately.

3. Improve the usability of large language models: today's huge models are hard for companies to fine-tune on their own data. To address this, we adjust the size of the Korean model to improve usability and make it easier to apply to natural language processing tasks.

## Usage

The following example loads the model and tokenizer with the transformers library.

```python
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained("quantumaikr/KoreanLM-3B")
tokenizer = transformers.AutoTokenizer.from_pretrained("quantumaikr/KoreanLM-3B")
```

## Technical inquiries

[email protected] www.quantumai.kr
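Building on the loading example in the card, a hedged generation sketch (the prompt and sampling settings are illustrative only):

```python
import transformers

model_id = "quantumaikr/KoreanLM-3B"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = transformers.AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("한국어 언어모델은", return_tensors="pt")  # "Korean language models are ..."
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```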
HorcruxNo13/swinv2-small-patch4-window8-256-finetuned-eurosat
HorcruxNo13
2023-09-02T12:44:00Z
146
0
transformers
[ "transformers", "pytorch", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-small-patch4-window8-256", "base_model:finetune:microsoft/swinv2-small-patch4-window8-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-02T12:25:25Z
--- license: apache-2.0 base_model: microsoft/swinv2-small-patch4-window8-256 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swinv2-small-patch4-window8-256-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.7333333333333333 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-small-patch4-window8-256-finetuned-eurosat This model is a fine-tuned version of [microsoft/swinv2-small-patch4-window8-256](https://huggingface.co/microsoft/swinv2-small-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5868 - Accuracy: 0.7333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 8 | 1.1951 | 0.2667 | | 5.0901 | 2.0 | 16 | 1.4301 | 0.7333 | | 2.785 | 3.0 | 24 | 1.1514 | 0.2667 | | 0.8599 | 4.0 | 32 | 0.5810 | 0.7333 | | 0.6058 | 5.0 | 40 | 0.5868 | 0.7333 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
simlamkr1/llama2-simtestmodel1
simlamkr1
2023-09-02T12:32:06Z
0
0
peft
[ "peft", "pytorch", "llama", "region:us" ]
null
2023-09-01T13:56:00Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
penguinman73/xlm-roberta-base-finetuned-panx-en
penguinman73
2023-09-02T12:25:02Z
103
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-02T12:22:08Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4028 - F1: 0.6831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1353 | 1.0 | 50 | 0.6267 | 0.5068 | | 0.5283 | 2.0 | 100 | 0.4369 | 0.6552 | | 0.358 | 3.0 | 150 | 0.4028 | 0.6831 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
NiscR/Reinforce-Pixel1
NiscR
2023-09-02T12:19:12Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T11:35:10Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixel1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 31.20 +/- 23.29 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
penguinman73/xlm-roberta-base-finetuned-panx-fr
penguinman73
2023-09-02T12:18:32Z
124
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-02T12:13:41Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2760 - F1: 0.8452 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5839 | 1.0 | 191 | 0.3623 | 0.7527 | | 0.2607 | 2.0 | 382 | 0.2836 | 0.8238 | | 0.1745 | 3.0 | 573 | 0.2760 | 0.8452 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
penguinman73/xlm-roberta-base-finetuned-panx-de-fr
penguinman73
2023-09-02T12:12:18Z
114
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-02T11:58:38Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1623 - F1: 0.8603 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2891 | 1.0 | 715 | 0.1813 | 0.8232 | | 0.1482 | 2.0 | 1430 | 0.1586 | 0.8462 | | 0.0959 | 3.0 | 2145 | 0.1623 | 0.8603 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
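As the card shows only training logs, here is a minimal NER sketch for this checkpoint (the example sentence is an arbitrary placeholder):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tags into whole entity spans.
ner = pipeline(
    "token-classification",
    model="penguinman73/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel a rencontré Emmanuel Macron à Berlin."))
```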
darthruebezahl/alicia02092023
darthruebezahl
2023-09-02T12:09:23Z
29
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-02T12:07:42Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: Alicia02092023 --- ### Alicia02092023 Dreambooth model trained by darthruebezahl with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: Alicia02092023 (use that on your prompt) ![Alicia02092023 0](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%281%29.jpg)![Alicia02092023 1](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%282%29.jpg)![Alicia02092023 2](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%283%29.jpg)![Alicia02092023 3](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%284%29.jpg)![Alicia02092023 4](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%285%29.jpg)![Alicia02092023 5](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%286%29.jpg)![Alicia02092023 6](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%287%29.jpg)![Alicia02092023 7](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%288%29.jpg)![Alicia02092023 8](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%289%29.jpg)![Alicia02092023 9](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2810%29.jpg)![Alicia02092023 10](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2811%29.jpg)![Alicia02092023 11](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2812%29.jpg)![Alicia02092023 12](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2813%29.jpg)![Alicia02092023 13](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2814%29.jpg)![Alicia02092023 14](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2815%29.jpg)![Alicia02092023 15](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2816%29.jpg)![Alicia02092023 16](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2817%29.jpg)![Alicia02092023 17](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2818%29.jpg)![Alicia02092023 18](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2819%29.jpg)![Alicia02092023 19](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2820%29.jpg)
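The card points to a Colab notebook for inference; below is a local `diffusers` sketch, assuming the repo hosts standard v1-5 Dreambooth weights as the card states (the prompt wording and output filename are arbitrary):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "darthruebezahl/alicia02092023", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait photo of Alicia02092023").images[0]  # concept token from the card
image.save("alicia.png")
```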
inkoziev/chargpt-96M
inkoziev
2023-09-02T12:08:27Z
146
3
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "causal-lm", "ru", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-15T11:18:43Z
---
license: openrail
language:
- ru
library_name: transformers
tags:
- pytorch
- causal-lm
---

## CharGPT-96M

This is a tiny language model with **character-level** tokenization, intended for all kinds of experiments where a task is handled poorly because of BPE tokenization into words and subword pieces.

A detailed description and usage examples are available in the model card of [charllama-35M](https://huggingface.co/inkoziev/charllama-35M).
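A minimal generation sketch, assuming the tokenizer loads through the standard `AutoTokenizer`/`AutoModelForCausalLM` API (the Russian prompt and sampling settings are illustrative; see the charllama-35M card linked above for the documented snippet):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("inkoziev/chargpt-96M")
model = AutoModelForCausalLM.from_pretrained("inkoziev/chargpt-96M")

# Character-level tokenization: the prompt is split into single characters.
inputs = tokenizer("Однажды ", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```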
fkc294/xlm-roberta-base-finetuned-panx-de
fkc294
2023-09-02T11:56:53Z
124
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-02T11:06:08Z
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      config: PAN-X.de
      split: validation
      args: PAN-X.de
    metrics:
    - name: F1
      type: f1
      value: 0.8646808510638297
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1361
- F1: 0.8647

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2595        | 1.0   | 525  | 0.1540          | 0.8302 |
| 0.1265        | 2.0   | 1050 | 0.1493          | 0.8468 |
| 0.0806        | 3.0   | 1575 | 0.1361          | 0.8647 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
penguinman73/xlm-roberta-base-finetuned-panx-de
penguinman73
2023-09-02T11:56:10Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-27T01:35:12Z
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2992
- F1: 0.8285

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6098        | 1.0   | 167  | 0.3570          | 0.7592 |
| 0.2633        | 2.0   | 334  | 0.2995          | 0.8171 |
| 0.1792        | 3.0   | 501  | 0.2992          | 0.8285 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
y22ma/sdxl-dabble-model
y22ma
2023-09-02T11:46:15Z
4
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-09-01T14:12:19Z
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: y22ma/Dabble-interior-captions
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
inference: true
---

# Text-to-image finetuning - y22ma/sdxl-dabble-model

This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **y22ma/Dabble-interior-captions** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: "a beautiful living room":

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

Special VAE used for training: None.
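A minimal inference sketch with `diffusers` (the fp16 precision and CUDA device are assumptions; the prompt is taken from the examples above):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the finetuned SDXL pipeline from the Hub.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "y22ma/sdxl-dabble-model", torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe("a beautiful living room").images[0]
image.save("living_room.png")
```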
amgodbole/bloom_prompt_tuning_1693653323.8270018
amgodbole
2023-09-02T11:36:37Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-02T11:36:36Z
---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.4.0
softaken/softaken-dbx-to-pst-converter
softaken
2023-09-02T11:35:00Z
0
0
null
[ "region:us" ]
null
2023-09-02T11:18:55Z
Softaken DBX to PST Converter Software is a convenient computer program for exporting Outlook Express emails to Outlook PST file format. Users can export single or multiple DBX files and folders to Outlook PST file format. No technical knowledge is needed to operate this software and convert DBX files to PST file format. Users can convert an unlimited number of DBX files without any data limitation. The conversion tool provides a complete preview of the DBX file before the conversion process begins. Users can export DBX files into multiple other well-known file formats such as PST, EML, EMLX, MSG, MBOX, etc. The software also works with multiple MS Outlook versions such as 2002, 2003, 2007, 2010, 2013, 2016, and 2019. Users can save their exported data to any required location on the desktop. This is a Windows-based tool that works with all Windows systems such as Windows 11, Windows 10 S, Windows 10, Windows 8/8.1, Windows 7, Windows Vista, Windows XP, Windows 2000, etc. Grab the free demo version of this software to learn more about its features and functions.

Read More: https://www.softaken.com/dbx-to-pst-converter
casque/FilmVelvia3
casque
2023-09-02T11:34:13Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-02T11:32:49Z
---
license: creativeml-openrail-m
---
Mustain/line_fujiki3
Mustain
2023-09-02T11:20:10Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T11:20:04Z
---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.6.0.dev0
dwitidibyajyoti/fine_tune_layoutmlv3_model
dwitidibyajyoti
2023-09-02T11:15:36Z
77
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-30T09:45:10Z
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# test

This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- Precision: 0.5109
- Recall: 0.6026
- F1: 0.5529
- Accuracy: 0.9222

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 8.33  | 100  | 0.6800          | 0.3371    | 0.3846 | 0.3593 | 0.7682   |
| No log        | 16.67 | 200  | 0.3088          | 0.5204    | 0.6538 | 0.5795 | 0.9156   |
| No log        | 25.0  | 300  | 0.2142          | 0.5326    | 0.6282 | 0.5765 | 0.9305   |
| No log        | 33.33 | 400  | 0.2301          | 0.5795    | 0.6538 | 0.6145 | 0.9288   |
| 0.4115        | 41.67 | 500  | 0.2426          | 0.5618    | 0.6410 | 0.5988 | 0.9272   |
| 0.4115        | 50.0  | 600  | 0.4171          | 0.6190    | 0.6667 | 0.6420 | 0.8924   |
| 0.4115        | 58.33 | 700  | 0.2265          | 0.5393    | 0.6154 | 0.5749 | 0.9371   |
| 0.4115        | 66.67 | 800  | 0.2869          | 0.5506    | 0.6282 | 0.5868 | 0.9156   |
| 0.4115        | 75.0  | 900  | 0.2633          | 0.5568    | 0.6282 | 0.5904 | 0.9272   |
| 0.0231        | 83.33 | 1000 | 0.2763          | 0.5109    | 0.6026 | 0.5529 | 0.9222   |

### Framework versions

- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
yaohuacn/a2c-PandaReachDense-v3
yaohuacn
2023-09-02T11:10:11Z
2
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T11:05:12Z
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: -0.19 +/- 0.08
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v3**

This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename follows the usual `algo-env.zip` convention and is an assumption; check the repo's file listing):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and restore the agent.
checkpoint = load_from_hub(
    repo_id="yaohuacn/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```
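After loading, a hedged rollout sketch; it assumes `panda-gym` is installed, whose import registers the PandaReachDense-v3 environment with Gymnasium:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments (assumed installed)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
done = False
while not done:
    # `model` is the A2C agent loaded in the snippet above.
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```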
madroid/onnx-whisper
madroid
2023-09-02T11:02:02Z
0
0
null
[ "onnx", "whisper", "openai", "license:apache-2.0", "region:us" ]
null
2023-09-02T07:14:04Z
---
license: apache-2.0
tags:
- whisper
- onnx
- openai
---
casque/majicmixRealistic_betterV2V25
casque
2023-09-02T11:00:36Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-02T10:43:18Z
---
license: creativeml-openrail-m
---
JanSt/gbert-base-finetuned-twitter
JanSt
2023-09-02T10:57:40Z
8
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "base_model:deepset/gbert-base", "base_model:finetune:deepset/gbert-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-24T10:58:07Z
---
license: mit
base_model: deepset/gbert-base
tags:
- generated_from_trainer
model-index:
- name: gbert-base-finetuned-twitter
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gbert-base-finetuned-twitter

This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7380

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 192
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.194         | 1.0   | 4180  | 1.9622          |
| 2.0075        | 2.0   | 8360  | 1.8813          |
| 1.9429        | 3.0   | 12540 | 1.8339          |
| 1.8985        | 4.0   | 16720 | 1.8057          |
| 1.8676        | 5.0   | 20900 | 1.7801          |
| 1.8446        | 6.0   | 25080 | 1.7793          |
| 1.829         | 7.0   | 29260 | 1.7580          |
| 1.815         | 8.0   | 33440 | 1.7445          |
| 1.8048        | 9.0   | 37620 | 1.7319          |
| 1.7997        | 10.0  | 41800 | 1.7331          |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
andrewcho92/helloworld
andrewcho92
2023-09-02T10:33:10Z
0
0
null
[ "text-generation", "en", "license:openrail", "region:us" ]
text-generation
2023-09-02T10:14:37Z
---
license: openrail
language:
- en
pipeline_tag: text-generation
---
adimazuz/texi-v3
adimazuz
2023-09-02T10:30:56Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T10:30:54Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: texi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # assumed import; `load_from_hub` is the helper from the Hugging Face Deep RL course

model = load_from_hub(repo_id="adimazuz/texi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
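A hedged greedy-evaluation sketch to go with the snippet above; it assumes the pickled dict exposes a `qtable` key alongside `env_id`, the convention used by the Deep RL course this card format comes from:

```python
import numpy as np

state, _ = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```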
jigglesaw/finetuning-sentiment-model-3000-samples
jigglesaw
2023-09-02T10:16:22Z
106
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-02T08:56:24Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      config: plain_text
      split: test
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8666666666666667
    - name: F1
      type: f1
      value: 0.870967741935484
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3394
- Accuracy: 0.8667
- F1: 0.8710

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
gg4ever/trOCR-final
gg4ever
2023-09-02T10:15:40Z
126
0
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "image-to-text", "ko", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2023-08-22T11:31:10Z
---
license: apache-2.0
language:
- ko
metrics:
- cer
- wer
pipeline_tag: image-to-text
---

# trOCR-final

Fine-tuned `VisionEncoderDecoderModel` (encoder + decoder):
- encoder = 'facebook/deit-base-distilled-patch16-384'
- decoder = 'klue/roberta-base'

## How to Get Started with the Model

```python
from transformers import VisionEncoderDecoderModel, AutoTokenizer, TrOCRProcessor
import torch
from PIL import Image

device = torch.device('cuda')  # change 'cuda' if you need

image_path = '(your image path)'  # the image can be .jpg or .png
image = Image.open(image_path)

# Hugging Face download: https://huggingface.co/gg4ever/trOCR-final
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
trocr_model = "gg4ever/trOCR-final"
model = VisionEncoderDecoderModel.from_pretrained(trocr_model).to(device)
tokenizer = AutoTokenizer.from_pretrained(trocr_model)

pixel_values = (processor(image, return_tensors="pt").pixel_values).to(device)
generated_ids = model.generate(pixel_values)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```

## Training Details

### Training Data

- 1M words generated by TextRecognitionDataGenerator (trdg): https://github.com/Belval/TextRecognitionDataGenerator/blob/master/trdg/run.py
- 1.1M words from the AI-hub OCR words dataset: https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=81

### Training Hyperparameters

| hyperparameters             | values  |
|-----------------------------|---------|
| predict_with_generate       | True    |
| evaluation_strategy         | "steps" |
| per_device_train_batch_size | 32      |
| per_device_eval_batch_size  | 32      |
| num_train_epochs            | 2       |
| fp16                        | True    |
| learning_rate               | 4e-5    |
| eval_steps                  | 10000   |
| warmup_steps                | 20000   |
| weight_decay                | 0.01    |
muralee491/murale
muralee491
2023-09-02T10:14:33Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T10:12:40Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.6.0.dev0
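The values listed above correspond to a `transformers.BitsAndBytesConfig` along these lines (a sketch reconstructed from the list, not taken from the training script):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization matching the config reported above;
# fields left at their defaults (skip_modules, fp32 offload, ...) are omitted.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```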
KhalfounMehdi/mura_vit_224
KhalfounMehdi
2023-09-02T10:01:11Z
192
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "autotrain", "dataset:KhalfounMehdi/mura_dataset_processed_224px_train_val", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-02T06:30:20Z
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
datasets:
- KhalfounMehdi/mura_dataset_processed_224px_train_val
metrics:
- accuracy
---

# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

- accuracy: 0.7795551112221945
- recall: 0.9037098791162984
- precision: 0.7690670450514366
- f1: 0.83096972019931
- total_time_in_seconds: 81.18831510400014
- samples_per_second: 49.28049060846776
- latency_in_seconds: 0.020292005774556397
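A minimal inference sketch via the `transformers` pipeline API (the image path is illustrative; any 224px radiograph in a common image format should work):

```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub.
classifier = pipeline("image-classification", model="KhalfounMehdi/mura_vit_224")

preds = classifier("xray.png")  # hypothetical local image path
print(preds)  # list of {'label': ..., 'score': ...} dicts
```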
nichelia/example100
nichelia
2023-09-02T09:40:53Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T09:40:51Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.5.0