modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: list
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
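A minimal sketch of loading records with this schema for analysis, assuming they are stored in a Parquet file (the filename `models.parquet` is hypothetical):

```python
import pandas as pd

# Timestamps load as tz-aware datetime64[us, UTC]; downloads/likes as int64.
df = pd.read_parquet("models.parquet")

# Example query: the ten most-downloaded models and their pipeline tags.
top = df.sort_values("downloads", ascending=False).head(10)
print(top[["modelId", "downloads", "likes", "pipeline_tag"]])
```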
huggingtweets/20pointsbot-apesahoy-chai_ste-deepfanfiction-nsp_gpt2-pldroneoperated
huggingtweets
2022-08-12T19:43:06Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-12T19:41:45Z
--- language: en thumbnail: http://www.huggingtweets.com/20pointsbot-apesahoy-chai_ste-deepfanfiction-nsp_gpt2-pldroneoperated/1660333381797/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1556081004699435010/Qvh20nyO_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1479595267800322048/Aqqb82wz_400x400.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">20 Points Ahead Bot & Humongous Ape MP & ste 🍊 & Deep Fanfiction & Ninja Sex Party but AI & PLDroneOperated</div> <div style="text-align: center; font-size: 14px;">@20pointsbot-apesahoy-chai_ste-deepfanfiction-nsp_gpt2-pldroneoperated</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 20 Points Ahead Bot & Humongous Ape MP & ste 🍊 & Deep Fanfiction & Ninja Sex Party but AI & PLDroneOperated. | Data | 20 Points Ahead Bot | Humongous Ape MP | ste 🍊 | Deep Fanfiction | Ninja Sex Party but AI | PLDroneOperated | | --- | --- | --- | --- | --- | --- | --- | | Tweets downloaded | 317 | 3247 | 3191 | 244 | 692 | 55 | | Retweets | 0 | 200 | 291 | 1 | 13 | 0 | | Short tweets | 0 | 609 | 486 | 0 | 44 | 0 | | Tweets kept | 317 | 2438 | 2414 | 243 | 635 | 55 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/175dtqp0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @20pointsbot-apesahoy-chai_ste-deepfanfiction-nsp_gpt2-pldroneoperated's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/48ei9wzp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/48ei9wzp/artifacts) is logged and versioned. 
## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/20pointsbot-apesahoy-chai_ste-deepfanfiction-nsp_gpt2-pldroneoperated') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Ammar-alhaj-ali/arabic-MARBERT-news-article-classification
Ammar-alhaj-ali
2022-08-12T19:39:14Z
224
3
transformers
[ "transformers", "pytorch", "bert", "text-classification", "text classification", "news", "ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-12T13:55:59Z
--- language: - ar widget: - text: "أخطرت شركة أرامكو السعودية 4 على الأقل من المشترين في شمال آسيا بأنها ستورد إليهم الكميات المتعاقد عليها من النفط الخام كاملة في سبتمبرأيلول المقبل. وقالت مصادر مطلعة لرويترز إن السعودية، أكبر مصدر النفط في العالم، كانت قد رفعت سعر البيع الرسمي للمشترين الآسيويين إلى مستويات قياسية لذلك الشهر." - text: "يرى المحلل العسكري والإستراتيجي اعياد الطوفان أن أحد أسباب الحرب الروسية الأوكرانية هو أن الولايات المتحدة تريد أن تقاتل كما يقال حتى آخر جندي أوكراني بمعنى أن واشنطن تسعى لاستنزاف الروس وكشف أسلحتهم السرية والإستراتيجية من دون أن تتحمل أي خسائر على الأرض." tags: - text classification - news --- ## Arabic MARBERT News Article Classification Model #### Model description **arabic-MARBERT-news-article-classification Model** is a news article classification model that was built by fine-tuning the [MARBERT](https://huggingface.co/UBC-NLP/MARBERT) model. For the fine-tuning, I used the [SANAD: Single-Label Arabic News Articles Dataset](https://data.mendeley.com/datasets/57zpx667y9), which includes 7 labels (Culture, Finance, Medical, Politics, Religion, Sports, and Tech). #### How to use To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> model = pipeline('text-classification', model='Ammar-alhaj-ali/arabic-MARBERT-news-article-classification')
>>> sentences = ['أخطرت شركة أرامكو السعودية 4 على الأقل من المشترين في شمال آسيا بأنها ستورد إليهم الكميات المتعاقد عليها من النفط الخام كاملة في سبتمبرأيلول المقبل. وقالت مصادر مطلعة لرويترز إن السعودية، أكبر مصدر النفط في العالم، كانت قد رفعت سعر البيع الرسمي للمشترين الآسيويين إلى مستويات قياسية لذلك الشهر.']
>>> model(sentences)
[{'label': 'Finance', 'score': 0.9998553991317749}]
```
mdround/ppo-LunarLander-v2
mdround
2022-08-12T18:54:01Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-08-12T18:53:31Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 242.35 +/- 19.99 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is assumed to follow the usual `<repo-name>.zip` convention of huggingface_sb3:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; check the repository's file list if it differs.
checkpoint = load_from_hub(repo_id="mdround/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
alkzar90/croupier-creature-classifier
alkzar90
2022-08-12T18:20:14Z
57
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-08-02T05:24:16Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer datasets: - imagefolder widget: - src: https://huggingface.co/alkzar90/croupier-creature-classifier/resolve/main/examples/crusader_peco_peco.png example_title: Crusader-Rangarok-Online - src: https://huggingface.co/alkzar90/croupier-creature-classifier/resolve/main/examples/goblin_wow.png example_title: Goblin-WoW - src: https://huggingface.co/alkzar90/croupier-creature-classifier/resolve/main/examples/dobby_harry_potter.jpg example_title: Dobby-Harry-Potter - src: https://huggingface.co/alkzar90/croupier-creature-classifier/resolve/main/examples/resident_evil_nemesis.jpeg example_title: Nemesis-Resident-Evil metrics: - accuracy model-index: - name: croupier-creature-classifier results: - task: name: Image Classification type: image-classification dataset: name: croupier-mtg-dataset type: imagefolder config: alkzar90--croupier-mtg-dataset split: train args: alkzar90--croupier-mtg-dataset metrics: - name: Accuracy type: accuracy value: 0.7471264367816092 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # croupier-creature-classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the croupier-mtg-dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.7583 - Accuracy: 0.7471 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6663 | 1.1 | 100 | 1.0179 | 0.5941 | | 0.4924 | 2.2 | 200 | 0.7036 | 0.7529 | | 0.4552 | 3.3 | 300 | 0.6123 | 0.7824 | | 0.2355 | 4.4 | 400 | 0.5748 | 0.7647 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
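A usage sketch (not part of the original card); the image URL below is one of the card's own widget examples:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="alkzar90/croupier-creature-classifier")

# The pipeline accepts a URL, a local path, or a PIL.Image.
preds = classifier("https://huggingface.co/alkzar90/croupier-creature-classifier/resolve/main/examples/goblin_wow.png")
print(preds)  # list of {"label": ..., "score": ...} dicts
```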
butchland/q-FrozenLake-v1-4x4-noSlippery
butchland
2022-08-12T17:56:51Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-08-09T15:35:38Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the custom
# Q-learning implementation this model was trained with; they are not
# part of a published package.
model = load_from_hub(repo_id="butchland/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
aks234/pegasus-samsum
aks234
2022-08-12T17:54:34Z
11
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-09T18:03:41Z
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0 - Datasets 2.0.0 - Tokenizers 0.10.3
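A usage sketch (not in the original card); the dialogue below is an illustrative SAMSum-style input:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="aks234/pegasus-samsum")

dialogue = "Anna: Are we still meeting today? Ben: Yes, 3pm at the cafe. Anna: Perfect, see you there!"
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```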
bdokmeci/ppo-LunarLander-v2
bdokmeci
2022-08-12T16:55:42Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-08-12T16:55:04Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 173.53 +/- 59.72 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is assumed to follow the usual `<repo-name>.zip` convention of huggingface_sb3:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; check the repository's file list if it differs.
checkpoint = load_from_hub(repo_id="bdokmeci/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
mariolinml/roberta_large-chunking_0812_v0
mariolinml
2022-08-12T16:54:10Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T16:09:06Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta_large-chunking_0812_v0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_large-chunking_0812_v0 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3382 - Precision: 0.8195 - Recall: 0.8350 - F1: 0.8272 - Accuracy: 0.9106 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1816 | 1.0 | 1249 | 0.3544 | 0.7910 | 0.8154 | 0.8030 | 0.9055 | | 0.0719 | 2.0 | 2498 | 0.4084 | 0.8207 | 0.8316 | 0.8261 | 0.9141 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
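A usage sketch (not in the original card); the input sentence is illustrative, and the chunk label set depends on the unknown training data:

```python
from transformers import pipeline

chunker = pipeline("token-classification", model="mariolinml/roberta_large-chunking_0812_v0", aggregation_strategy="simple")
print(chunker("The quick brown fox jumps over the lazy dog."))
```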
cm-mueller/BACnet-Klassifizierung-Heizungstechnik
cm-mueller
2022-08-12T16:14:07Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-12T13:14:46Z
--- license: mit tags: - generated_from_trainer metrics: - f1 language: - de model-index: - name: BACnet-Klassifizierung-Heizungstechnik-bert-base-german-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BACnet-Klassifizierung-Heizungstechnik-bert-base-german-cased This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the [gart-labor](https://huggingface.co/gart-labor) "klassifizierung_heizung_v2" dataset. It achieves the following results on the evaluation set: - Loss: 0.0798 - F1: [1. 1. 0.99 0.94117647 1. 0.90909091 1. 1. 1. 1. ] ## Model description This model makes it possible to classify the heating technology components described with the BACnet standard into different categories. The model is based on a German-language data set. ## Intended uses & limitations The model divides descriptive texts into the following categories of heating technology: CHP, District heating, Heating_circuit_consumer, Heating_system_general, Boiler, Circuit_generator(feeder), Buffer storage, Hot water preparation, Heat pump, Heat exchanger, and Meter. ## Training and evaluation data The model is based on a German-language data set. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:----------------------------------------------------------------------------------------------------------------:| | 0.0281 | 0.9 | 8 | 0.0798 | [1. 1. 0.99 0.94117647 1. 0.90909091 1. 1. 1. 1. ] | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
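A usage sketch (not in the original card); the German BACnet point description is an invented example:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cm-mueller/BACnet-Klassifizierung-Heizungstechnik")
print(classifier("Vorlauftemperatur Heizkreis Sued"))  # illustrative input
```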
cm-mueller/BACnet-Klassifizierung-Kaeltettechnik
cm-mueller
2022-08-12T16:13:50Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-12T13:22:59Z
--- license: mit tags: - generated_from_trainer metrics: - f1 language: - de model-index: - name: BACnet-Klassifizierung-Kaeltettechnik-bert-base-german-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BACnet-Klassifizierung-Kaeltettechnik-bert-base-german-cased This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the [gart-labor](https://huggingface.co/gart-labor) "klassifizierung_kaelte_v2" dataset. It achieves the following results on the evaluation set: - Loss: 0.0466 - F1: [0.85714286 0.98507463 1. 1. ] ## Model description This model makes it possible to classify the refrigeration components described with the BACnet standard into different categories. The model is based on a German-language data set. ## Intended uses & limitations The model divides descriptive texts into the following refrigeration categories: Free_Cooling, Refrigeration_General, Chiller, Cold Storage and Recooling Plant ## Training and evaluation data The model is based on a German-language data set. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------:| | 0.0426 | 0.85 | 5 | 0.0439 | [0.85714286 0.98507463 1. 1. ] | | 0.0175 | 1.85 | 10 | 0.0466 | [0.85714286 0.98507463 1. 1. ] | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
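As with the heating model, a usage sketch (not in the original card) with an invented German input:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cm-mueller/BACnet-Klassifizierung-Kaeltettechnik")
print(classifier("Ruecklauftemperatur Kaltwassersatz 1"))  # illustrative input
```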
cm-mueller/BACnet-Klassifizierung-Sanitaertechnik
cm-mueller
2022-08-12T16:13:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-12T10:39:09Z
--- license: mit tags: - generated_from_trainer metrics: - f1 language: - de model-index: - name: BACnet-Klassifizierung-Sanitaertechnik-bert-base-german-cased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BACnet-Klassifizierung-Sanitaertechnik-bert-base-german-cased This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the [gart-labor](https://huggingface.co/gart-labor) "klassifizierung_sanitaer_v2" dataset. It achieves the following results on the evaluation set: - Loss: 0.0039 - F1: [1. 1. 1.] ## Model description This model makes it possible to classify the sanitary technology components described with the BACnet standard into different categories. The model is based on a German-language data set. ## Intended uses & limitations The model divides descriptive texts into the following sanitary engineering categories: Other, pressure boosting system, softening system, lifting system, sanitary_general, waste water, drinking water heating system and water meter. ## Training and evaluation data The model is based on a German-language data set. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:----------:| | 0.0507 | 1.0 | 1 | 0.1080 | [1. 1. 1.] | | 0.0547 | 2.0 | 2 | 0.0589 | [1. 1. 1.] | | 0.0407 | 3.0 | 3 | 0.0427 | [1. 1. 1.] | | 0.0294 | 4.0 | 4 | 0.0465 | [1. 1. 1.] | | 0.0284 | 5.0 | 5 | 0.0291 | [1. 1. 1.] | | 0.0208 | 6.0 | 6 | 0.0232 | [1. 1. 1.] | | 0.0171 | 7.0 | 7 | 0.0198 | [1. 1. 1.] | | 0.0153 | 8.0 | 8 | 0.0170 | [1. 1. 1.] | | 0.0134 | 9.0 | 9 | 0.0144 | [1. 1. 1.] | | 0.0126 | 10.0 | 10 | 0.0124 | [1. 1. 1.] | | 0.0108 | 11.0 | 11 | 0.0109 | [1. 1. 1.] | | 0.0096 | 12.0 | 12 | 0.0098 | [1. 1. 1.] | | 0.0084 | 13.0 | 13 | 0.0089 | [1. 1. 1.] | | 0.0082 | 14.0 | 14 | 0.0083 | [1. 1. 1.] | | 0.0071 | 15.0 | 15 | 0.0077 | [1. 1. 1.] | | 0.0068 | 16.0 | 16 | 0.0073 | [1. 1. 1.] | | 0.0064 | 17.0 | 17 | 0.0069 | [1. 1. 1.] | | 0.0059 | 18.0 | 18 | 0.0065 | [1. 1. 1.] | | 0.0053 | 19.0 | 19 | 0.0061 | [1. 1. 1.] | | 0.0052 | 20.0 | 20 | 0.0058 | [1. 1. 1.] | | 0.005 | 21.0 | 21 | 0.0056 | [1. 1. 1.] | | 0.0047 | 22.0 | 22 | 0.0053 | [1. 1. 1.] | | 0.0044 | 23.0 | 23 | 0.0051 | [1. 1. 1.] | | 0.0042 | 24.0 | 24 | 0.0050 | [1. 1. 1.] | | 0.0043 | 25.0 | 25 | 0.0048 | [1. 1. 1.] | | 0.004 | 26.0 | 26 | 0.0047 | [1. 1. 1.] | | 0.004 | 27.0 | 27 | 0.0045 | [1. 1. 1.] | | 0.004 | 28.0 | 28 | 0.0044 | [1. 1. 1.] | | 0.0037 | 29.0 | 29 | 0.0044 | [1. 1. 1.] | | 0.0037 | 30.0 | 30 | 0.0043 | [1. 1. 1.] | | 0.0037 | 31.0 | 31 | 0.0042 | [1. 1. 1.] | | 0.0035 | 32.0 | 32 | 0.0042 | [1. 1. 1.] | | 0.0036 | 33.0 | 33 | 0.0041 | [1. 1. 1.] | | 0.0035 | 34.0 | 34 | 0.0041 | [1. 1. 1.] | | 0.0037 | 35.0 | 35 | 0.0040 | [1. 1. 1.] | | 0.0034 | 36.0 | 36 | 0.0040 | [1. 1. 1.] | | 0.0033 | 37.0 | 37 | 0.0040 | [1. 1. 1.] | | 0.0034 | 38.0 | 38 | 0.0040 | [1. 1. 1.] | | 0.0034 | 39.0 | 39 | 0.0040 | [1. 1. 1.] | | 0.0034 | 40.0 | 40 | 0.0039 | [1. 1. 1.] 
| ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
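Likewise, a usage sketch for the sanitary model (not in the original card), with an invented German input:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cm-mueller/BACnet-Klassifizierung-Sanitaertechnik")
print(classifier("Stoerung Hebeanlage Untergeschoss"))  # illustrative input
```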
Saraswati/ppo-PixelCopter
Saraswati
2022-08-12T16:03:16Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-08-11T02:03:02Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: ppo-PixelCopter results: - metrics: - type: mean_reward value: 18.70 +/- 15.13 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
DTAI-KULeuven
2022-08-12T14:38:55Z
14
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "Tweets", "Sentiment analysis", "multilingual", "nl", "fr", "en", "arxiv:2104.09947", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - multilingual - nl - fr - en tags: - Tweets - Sentiment analysis widget: - text: "I really wish I could leave my house after midnight, this makes no sense!" --- # Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT [Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947) This model can be used to determine whether a tweet expresses support for a curfew. The model was trained on manually labeled tweets from Belgium in Dutch, French and English. We categorized several months' worth of these Tweets by topic (government COVID measure) and opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top). ![chart.png](https://github.com/iPieter/bert-corona-tweets/raw/master/chart.png) Models used in this paper are on HuggingFace: - https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support - https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
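A usage sketch (not in the original card); the input is the card's own widget example:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support")
print(classifier("I really wish I could leave my house after midnight, this makes no sense!"))
```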
moro23/wav2vec-large-xls-r-300-ha-colab_2
moro23
2022-08-12T14:10:40Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_10_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-08-12T09:14:16Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice_10_0 model-index: - name: wav2vec-large-xls-r-300-ha-colab_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec-large-xls-r-300-ha-colab_2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4473 - Wer: 0.4392 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0168 | 5.19 | 400 | 0.4473 | 0.4392 | | 0.0167 | 10.39 | 800 | 0.4473 | 0.4392 | | 0.0166 | 15.58 | 1200 | 0.4473 | 0.4392 | | 0.0172 | 20.77 | 1600 | 0.4473 | 0.4392 | | 0.0166 | 25.97 | 2000 | 0.4473 | 0.4392 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 2.3.2 - Tokenizers 0.10.3
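A usage sketch (not in the original card); `sample.wav` is a placeholder for a Hausa audio file, and the pipeline needs ffmpeg to decode it:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="moro23/wav2vec-large-xls-r-300-ha-colab_2")
print(asr("sample.wav")["text"])  # path is a placeholder
```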
AkashKhamkar/InSumT510k
AkashKhamkar
2022-08-12T13:09:54Z
7
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "license:afl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-10T11:27:49Z
--- license: afl-3.0 --- About: This model can be used for text summarization. The dataset on which it was fine-tuned consisted of 10,323 articles. The data fields: - "Headline": title of the article - "articleBody": the main article content - "source": the link to the read-more page. The data splits were: - Train: 8258. - Validation: 2065. ### How to use with a pipeline
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("AkashKhamkar/InSumT510k")
model = AutoModelForSeq2SeqLM.from_pretrained("AkashKhamkar/InSumT510k")

summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)
summarizer("Text for summarization...", min_length=5, max_length=50)
```
language: - English library_name: Pytorch tags: - Summarization - T5-base - Conditional Modelling
fxmarty/levit-256-onnx
fxmarty
2022-08-12T11:38:35Z
23
2
transformers
[ "transformers", "onnx", "levit", "image-classification", "vision", "dataset:imagenet-1k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-08-12T08:52:23Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k --- This model is a fork of [facebook/levit-256](https://huggingface.co/facebook/levit-256), where: * `nn.BatchNorm2d` and `nn.Conv2d` are fused * `nn.BatchNorm1d` and `nn.Linear` are fused and the optimized model is converted to the onnx format. The fusion of layers leverages torch.fx, using the transformations `FuseBatchNorm2dInConv2d` and `FuseBatchNorm1dInLinear` soon to be available to use out-of-the-box with 🤗 Optimum, check it out: https://huggingface.co/docs/optimum/main/en/fx/optimization#the-transformation-guide . ## How to use
```python
from optimum.onnxruntime.modeling_ort import ORTModelForImageClassification
from transformers import AutoFeatureExtractor
from PIL import Image
import requests

preprocessor = AutoFeatureExtractor.from_pretrained("fxmarty/levit-256-onnx")
ort_model = ORTModelForImageClassification.from_pretrained("fxmarty/levit-256-onnx")

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

inputs = preprocessor(images=image, return_tensors="pt")
outputs = ort_model(**inputs)

predicted_class_idx = outputs.logits.argmax(-1).item()
print("Predicted class:", ort_model.config.id2label[predicted_class_idx])
```
To be safe, check as well that the onnx model returns the same logits as the PyTorch model:
```python
import torch

from optimum.onnxruntime.modeling_ort import ORTModelForImageClassification
from transformers import AutoModelForImageClassification

pt_model = AutoModelForImageClassification.from_pretrained("facebook/levit-256")
pt_model.eval()
ort_model = ORTModelForImageClassification.from_pretrained("fxmarty/levit-256-onnx")

inp = {"pixel_values": torch.rand(1, 3, 224, 224)}

with torch.no_grad():
    res = pt_model(**inp)
res_ort = ort_model(**inp)

assert torch.allclose(res.logits, res_ort.logits, atol=1e-4)
```
## Benchmarking More than 2x throughput with batch normalization folding and onnxruntime 🔥 Below you can find latency percentiles and mean (in ms), and the models' throughput (in iterations/s).
```
PyTorch runtime:
{'latency_50': 22.3024695, 'latency_90': 23.1230725, 'latency_95': 23.2653985, 'latency_99': 23.60095705, 'latency_999': 23.865580469999998, 'latency_mean': 22.442956878923766, 'latency_std': 0.46544295612971265, 'nb_forwards': 446, 'throughput': 44.6}

Optimum-onnxruntime runtime:
{'latency_50': 9.302445, 'latency_90': 9.782875, 'latency_95': 9.9071944, 'latency_99': 11.084606999999997, 'latency_999': 12.035858692000001, 'latency_mean': 9.357703552853133, 'latency_std': 0.4018553286992142, 'nb_forwards': 1069, 'throughput': 106.9}
```
Run on your own machine with:
```python
import torch
from pprint import pprint

from optimum.runs_base import TimeBenchmark

# pt_model and ort_model as created in the snippets above
time_benchmark_ort = TimeBenchmark(
    model=ort_model,
    batch_size=1,
    input_length=224,
    model_input_names={"pixel_values"},
    warmup_runs=10,
    duration=10,
)
results_ort = time_benchmark_ort.execute()

with torch.no_grad():
    time_benchmark_pt = TimeBenchmark(
        model=pt_model,
        batch_size=1,
        input_length=224,
        model_input_names={"pixel_values"},
        warmup_runs=10,
        duration=10,
    )
    results_pt = time_benchmark_pt.execute()

print("PyTorch runtime:\n")
pprint(results_pt)
print("\nOptimum-onnxruntime runtime:\n")
pprint(results_ort)
```
dquisi/story_spanish_gpt2_v2
dquisi
2022-08-12T11:22:03Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-11T21:11:10Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: story_spanish_gpt2_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # story_spanish_gpt2_v2 This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7640 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.9931 | 1.0 | 1758 | 3.8301 | | 3.7483 | 2.0 | 3516 | 3.7771 | | 3.6494 | 3.0 | 5274 | 3.7640 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
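A usage sketch (not in the original card); the Spanish prompt is illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="dquisi/story_spanish_gpt2_v2")
print(generator("Había una vez", max_length=50)[0]["generated_text"])
```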
Gausstein26/wav2vec2-base-50k
Gausstein26
2022-08-12T09:34:12Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-08-12T00:57:47Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-50k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-50k This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 3.5640 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 10.7005 | 0.48 | 300 | 5.3021 | 1.0 | | 3.9938 | 0.96 | 600 | 3.4997 | 1.0 | | 3.591 | 1.44 | 900 | 3.5641 | 1.0 | | 3.6168 | 1.92 | 1200 | 3.5641 | 1.0 | | 3.6252 | 2.4 | 1500 | 3.5641 | 1.0 | | 3.6137 | 2.88 | 1800 | 3.5641 | 1.0 | | 3.6124 | 3.36 | 2100 | 3.5641 | 1.0 | | 3.6171 | 3.84 | 2400 | 3.5641 | 1.0 | | 3.6436 | 4.32 | 2700 | 3.5641 | 1.0 | | 3.6189 | 4.8 | 3000 | 3.5640 | 1.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
Jiqing/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad
Jiqing
2022-08-12T09:24:10Z
11
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-08-12T09:22:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
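A usage sketch (not in the original card); question and context are illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Jiqing/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad")
print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is a landmark in Paris, France."))
```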
DOOGLAK/Article_250v8_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-12T09:22:05Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article250v8_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T09:16:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - article250v8_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Article_250v8_NER_Model_3Epochs_UNAUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: article250v8_wikigold_split type: article250v8_wikigold_split args: default metrics: - name: Precision type: precision value: 0.4215600350569676 - name: Recall type: recall value: 0.3990597345132743 - name: F1 type: f1 value: 0.4100014206563432 - name: Accuracy type: accuracy value: 0.878173617797598 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Article_250v8_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article250v8_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3329 - Precision: 0.4216 - Recall: 0.3991 - F1: 0.4100 - Accuracy: 0.8782 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 28 | 0.5293 | 0.1767 | 0.0454 | 0.0722 | 0.7988 | | No log | 2.0 | 56 | 0.3589 | 0.3246 | 0.2987 | 0.3111 | 0.8611 | | No log | 3.0 | 84 | 0.3329 | 0.4216 | 0.3991 | 0.4100 | 0.8782 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
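A usage sketch (not in the original card); the sentence is illustrative, and the entity classes presumably follow the CoNLL-style set used by wikigold:

```python
from transformers import pipeline

ner = pipeline("token-classification", model="DOOGLAK/Article_250v8_NER_Model_3Epochs_UNAUGMENTED", aggregation_strategy="simple")
print(ner("Barack Obama visited Paris with the United Nations delegation."))
```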
rhiga/q-Taxi-v3
rhiga
2022-08-12T07:50:11Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-08-12T07:50:03Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.50 +/- 2.72 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage
```python
import gym

# load_from_hub and evaluate_agent come from the custom Q-learning
# implementation this model was trained with (not a published package).
model = load_from_hub(repo_id="rhiga/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
rhiga/q-FrozenLake-v1-4x4-noSlippery
rhiga
2022-08-12T07:43:51Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-08-12T07:43:45Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage
```python
import gym

# load_from_hub and evaluate_agent come from the custom Q-learning
# implementation this model was trained with (not a published package).
model = load_from_hub(repo_id="rhiga/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
Lvxue/distilled-mt5-small-0.005-0.5
Lvxue
2022-08-12T07:19:53Z
7
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "en", "ro", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-12T06:09:48Z
--- language: - en - ro license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: distilled-mt5-small-0.005-0.5 results: - task: name: Translation type: translation dataset: name: wmt16 ro-en type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 7.642 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-mt5-small-0.005-0.5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 2.8309 - Bleu: 7.642 - Gen Len: 44.9085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
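A usage sketch (not in the original card). The card does not state the translation direction for the wmt16 ro-en pair; Romanian→English is assumed here:

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="Lvxue/distilled-mt5-small-0.005-0.5")
print(translator("Casa mea este aproape de gară."))  # direction (ro->en) is an assumption
```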
susank/distilbert-base-uncased-finetuned-emotion
susank
2022-08-12T05:45:28Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-12T05:33:23Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9240247841894665 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2281 - Accuracy: 0.924 - F1: 0.9240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8687 | 1.0 | 250 | 0.3390 | 0.9015 | 0.8984 | | 0.2645 | 2.0 | 500 | 0.2281 | 0.924 | 0.9240 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.0+cu113 - Datasets 2.0.0 - Tokenizers 0.10.3
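A usage sketch (not in the original card); the input sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="susank/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't stop smiling, today was wonderful!"))
```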
mariolinml/roberta_large-chunking_0811_v7
mariolinml
2022-08-12T05:06:05Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T03:57:43Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta_large-chunking_0811_v7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_large-chunking_0811_v7 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3687 - Precision: 0.8237 - Recall: 0.8406 - F1: 0.8320 - Accuracy: 0.9134 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1929 | 1.0 | 1249 | 0.4165 | 0.8034 | 0.8191 | 0.8112 | 0.9047 | | 0.0789 | 2.0 | 2498 | 0.4161 | 0.8262 | 0.8363 | 0.8312 | 0.9088 | | 0.0319 | 3.0 | 3747 | 0.5684 | 0.8104 | 0.8380 | 0.8240 | 0.9037 | | 0.0198 | 4.0 | 4996 | 0.6959 | 0.8237 | 0.8433 | 0.8334 | 0.9067 | | 0.0098 | 5.0 | 6245 | 0.7280 | 0.8234 | 0.8453 | 0.8342 | 0.9084 | | 0.0075 | 6.0 | 7494 | 0.7482 | 0.8259 | 0.8482 | 0.8369 | 0.9075 | | 0.0041 | 7.0 | 8743 | 0.7807 | 0.8396 | 0.8527 | 0.8461 | 0.9113 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
carted-nlp/categorization-finetuned-20220721-164940-distilled-20220811-132317
carted-nlp
2022-08-12T04:21:30Z
25
0
transformers
[ "transformers", "pytorch", "tensorboard", "onnx", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-11T13:25:02Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: categorization-finetuned-20220721-164940-distilled-20220811-132317 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # categorization-finetuned-20220721-164940-distilled-20220811-132317 This model is a fine-tuned version of [carted-nlp/categorization-finetuned-20220721-164940](https://huggingface.co/carted-nlp/categorization-finetuned-20220721-164940) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1522 - Accuracy: 0.8783 - F1: 0.8779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 314 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 2000 - num_epochs: 30.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:| | 0.5212 | 0.56 | 2500 | 0.2564 | 0.7953 | 0.7921 | | 0.243 | 1.12 | 5000 | 0.2110 | 0.8270 | 0.8249 | | 0.2105 | 1.69 | 7500 | 0.1925 | 0.8409 | 0.8391 | | 0.1939 | 2.25 | 10000 | 0.1837 | 0.8476 | 0.8465 | | 0.1838 | 2.81 | 12500 | 0.1771 | 0.8528 | 0.8517 | | 0.1729 | 3.37 | 15000 | 0.1722 | 0.8564 | 0.8555 | | 0.1687 | 3.94 | 17500 | 0.1684 | 0.8593 | 0.8576 | | 0.1602 | 4.5 | 20000 | 0.1653 | 0.8614 | 0.8604 | | 0.1572 | 5.06 | 22500 | 0.1629 | 0.8648 | 0.8638 | | 0.1507 | 5.62 | 25000 | 0.1605 | 0.8654 | 0.8646 | | 0.1483 | 6.19 | 27500 | 0.1602 | 0.8661 | 0.8653 | | 0.1431 | 6.75 | 30000 | 0.1597 | 0.8669 | 0.8663 | | 0.1393 | 7.31 | 32500 | 0.1581 | 0.8691 | 0.8687 | | 0.1374 | 7.87 | 35000 | 0.1556 | 0.8704 | 0.8697 | | 0.1321 | 8.43 | 37500 | 0.1558 | 0.8707 | 0.8700 | | 0.1328 | 9.0 | 40000 | 0.1536 | 0.8719 | 0.8711 | | 0.1261 | 9.56 | 42500 | 0.1544 | 0.8716 | 0.8708 | | 0.1256 | 10.12 | 45000 | 0.1541 | 0.8731 | 0.8725 | | 0.122 | 10.68 | 47500 | 0.1520 | 0.8741 | 0.8734 | | 0.1196 | 11.25 | 50000 | 0.1529 | 0.8734 | 0.8728 | | 0.1182 | 11.81 | 52500 | 0.1510 | 0.8758 | 0.8751 | | 0.1145 | 12.37 | 55000 | 0.1526 | 0.8746 | 0.8737 | | 0.1141 | 12.93 | 57500 | 0.1512 | 0.8765 | 0.8759 | | 0.1094 | 13.5 | 60000 | 0.1517 | 0.8760 | 0.8753 | | 0.1098 | 14.06 | 62500 | 0.1513 | 0.8771 | 0.8764 | | 0.1058 | 14.62 | 65000 | 0.1506 | 0.8775 | 0.8768 | | 0.1048 | 15.18 | 67500 | 0.1521 | 0.8774 | 0.8768 | | 0.1028 | 15.74 | 70000 | 0.1520 | 0.8778 | 0.8773 | | 0.1006 | 16.31 | 72500 | 0.1517 | 0.8780 | 0.8774 | | 0.1001 | 16.87 | 75000 | 0.1505 | 0.8794 | 0.8790 | | 0.0971 | 17.43 | 77500 | 0.1520 | 0.8784 | 0.8778 | | 0.0973 | 17.99 | 80000 | 0.1514 | 0.8796 | 0.8790 | | 0.0938 | 18.56 | 82500 | 0.1516 | 0.8795 | 0.8789 | | 0.0942 | 19.12 | 85000 | 0.1522 | 0.8794 | 0.8789 | | 0.0918 | 19.68 | 87500 | 0.1518 | 0.8799 | 0.8793 | | 0.0909 | 20.24 | 90000 | 0.1528 | 0.8803 | 0.8796 | | 0.0901 | 20.81 | 92500 | 0.1516 | 0.8799 | 0.8793 | | 0.0882 | 21.37 | 95000 | 0.1519 | 0.8800 | 0.8794 | | 0.088 | 21.93 | 97500 | 0.1517 | 0.8802 | 0.8798 | | 0.086 | 22.49 | 100000 | 0.1530 | 0.8800 | 0.8795 
| | 0.0861 | 23.05 | 102500 | 0.1523 | 0.8806 | 0.8801 | | 0.0846 | 23.62 | 105000 | 0.1524 | 0.8808 | 0.8802 | | 0.0843 | 24.18 | 107500 | 0.1522 | 0.8805 | 0.8800 | | 0.0836 | 24.74 | 110000 | 0.1525 | 0.8808 | 0.8803 | | 0.083 | 25.3 | 112500 | 0.1528 | 0.8810 | 0.8803 | | 0.0829 | 25.87 | 115000 | 0.1528 | 0.8808 | 0.8802 | | 0.082 | 26.43 | 117500 | 0.1529 | 0.8808 | 0.8802 | | 0.0818 | 26.99 | 120000 | 0.1525 | 0.8811 | 0.8805 | | 0.0816 | 27.55 | 122500 | 0.1526 | 0.8811 | 0.8806 | | 0.0809 | 28.12 | 125000 | 0.1528 | 0.8810 | 0.8805 | | 0.0809 | 28.68 | 127500 | 0.1527 | 0.8810 | 0.8804 | | 0.0814 | 29.24 | 130000 | 0.1528 | 0.8808 | 0.8802 | | 0.0807 | 29.8 | 132500 | 0.1528 | 0.8808 | 0.8802 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
Lvxue/distilled-mt5-small-1-0.25
Lvxue
2022-08-12T03:22:48Z
8
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "en", "ro", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-12T02:06:48Z
--- language: - en - ro license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: distilled-mt5-small-1-0.25 results: - task: name: Translation type: translation dataset: name: wmt16 ro-en type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 4.0871 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-mt5-small-1-0.25 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 6.8599 - Bleu: 4.0871 - Gen Len: 35.3267 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
Lvxue/distilled-mt5-small-1-1
Lvxue
2022-08-12T03:18:55Z
16
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "en", "ro", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-12T02:08:29Z
--- language: - en - ro license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: distilled-mt5-small-1-1 results: - task: name: Translation type: translation dataset: name: wmt16 ro-en type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 6.6959 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-mt5-small-1-1 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 2.8289 - Bleu: 6.6959 - Gen Len: 45.7539 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
Lvxue/distilled-mt5-small-0.005-0.25
Lvxue
2022-08-12T01:28:13Z
6
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "en", "ro", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-12T00:14:33Z
---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-0.005-0.25
  results:
  - task:
      name: Translation
      type: translation
    dataset:
      name: wmt16 ro-en
      type: wmt16
      args: ro-en
    metrics:
    - name: Bleu
      type: bleu
      value: 7.6069
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilled-mt5-small-0.005-0.25

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8536
- Bleu: 7.6069
- Gen Len: 45.1846

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
Lvxue/distilled-mt5-small-0.02-1
Lvxue
2022-08-12T01:25:57Z
6
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "en", "ro", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-12T00:11:11Z
---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-0.02-1
  results:
  - task:
      name: Translation
      type: translation
    dataset:
      name: wmt16 ro-en
      type: wmt16
      args: ro-en
    metrics:
    - name: Bleu
      type: bleu
      value: 7.2811
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilled-mt5-small-0.02-1

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8008
- Bleu: 7.2811
- Gen Len: 45.6168

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
brookelove/finetuning-sentiment-model-3000-samples
brookelove
2022-08-12T01:02:47Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-12T00:16:58Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      config: plain_text
      split: train
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8633333333333333
    - name: F1
      type: f1
      value: 0.8673139158576051
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3246
- Accuracy: 0.8633
- F1: 0.8673

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
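A minimal sketch for trying the classifier with `pipeline()` (the example review is illustrative; label names depend on the exported config):

```
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="brookelove/finetuning-sentiment-model-3000-samples",
)
# Returns e.g. [{'label': 'LABEL_1', 'score': ...}]; label names depend on the config.
print(classifier("This movie was a pleasant surprise from start to finish."))
```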
roshan151/Model_output
roshan151
2022-08-12T00:37:05Z
62
0
transformers
[ "transformers", "tf", "bert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-08-10T22:55:31Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: roshan151/Model_output
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# roshan151/Model_output

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9849
- Validation Loss: 2.8623
- Epoch: 4

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -82, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 100, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:---:|:---:|:---:|
| 3.1673 | 2.8445 | 0 |
| 2.9770 | 2.8557 | 1 |
| 3.0018 | 2.8612 | 2 |
| 2.9625 | 2.8496 | 3 |
| 2.9849 | 2.8623 | 4 |

### Framework versions

- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
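Since the repository ships TensorFlow weights, a minimal fill-mask sketch (the example sentence is illustrative; `[MASK]` is the bert-base-uncased mask token):

```
from transformers import pipeline

# framework="tf" because the checkpoint is TensorFlow-based.
unmasker = pipeline("fill-mask", model="roshan151/Model_output", framework="tf")
print(unmasker("The capital of France is [MASK]."))
```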
DOOGLAK/Article_500v9_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-12T00:22:20Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article500v9_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T00:17:10Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v9_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article500v9_wikigold_split
      type: article500v9_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.6868820039551747
    - name: Recall
      type: recall
      value: 0.7021563342318059
    - name: F1
      type: f1
      value: 0.6944351882705765
    - name: Accuracy
      type: accuracy
      value: 0.9339901171644343
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_500v9_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1975
- Precision: 0.6869
- Recall: 0.7022
- F1: 0.6944
- Accuracy: 0.9340

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 61 | 0.2954 | 0.4411 | 0.5290 | 0.4811 | 0.9042 |
| No log | 2.0 | 122 | 0.2061 | 0.6493 | 0.6900 | 0.6691 | 0.9315 |
| No log | 3.0 | 183 | 0.1975 | 0.6869 | 0.7022 | 0.6944 | 0.9340 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
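A minimal sketch for running the tagger (the example sentence is illustrative; the label set comes from the wikigold-style training split):

```
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/Article_500v9_NER_Model_3Epochs_UNAUGMENTED",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Abraham Lincoln was born in Kentucky."))
```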
DOOGLAK/Article_500v8_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-12T00:16:26Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article500v8_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T00:11:25Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v8_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article500v8_wikigold_split
      type: article500v8_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.6780405405405405
    - name: Recall
      type: recall
      value: 0.7117021276595744
    - name: F1
      type: f1
      value: 0.6944636678200693
    - name: Accuracy
      type: accuracy
      value: 0.9363021063950914
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_500v8_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1980
- Precision: 0.6780
- Recall: 0.7117
- F1: 0.6945
- Accuracy: 0.9363

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 64 | 0.2758 | 0.5405 | 0.5298 | 0.5351 | 0.9135 |
| No log | 2.0 | 128 | 0.2129 | 0.6350 | 0.6695 | 0.6518 | 0.9296 |
| No log | 3.0 | 192 | 0.1980 | 0.6780 | 0.7117 | 0.6945 | 0.9363 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_500v7_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-12T00:10:37Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article500v7_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T00:05:39Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v7_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v7_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article500v7_wikigold_split
      type: article500v7_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.6722146739130435
    - name: Recall
      type: recall
      value: 0.7278411180581096
    - name: F1
      type: f1
      value: 0.6989228324209782
    - name: Accuracy
      type: accuracy
      value: 0.938498377390592
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_500v7_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v7_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1885
- Precision: 0.6722
- Recall: 0.7278
- F1: 0.6989
- Accuracy: 0.9385

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 60 | 0.2683 | 0.4958 | 0.5388 | 0.5164 | 0.9127 |
| No log | 2.0 | 120 | 0.1973 | 0.6554 | 0.6896 | 0.6720 | 0.9343 |
| No log | 3.0 | 180 | 0.1885 | 0.6722 | 0.7278 | 0.6989 | 0.9385 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_500v6_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-12T00:05:01Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article500v6_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-12T00:00:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v6_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v6_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article500v6_wikigold_split
      type: article500v6_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.6462295081967213
    - name: Recall
      type: recall
      value: 0.6930379746835443
    - name: F1
      type: f1
      value: 0.6688157448252461
    - name: Accuracy
      type: accuracy
      value: 0.9318540995006005
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_500v6_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v6_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2025
- Precision: 0.6462
- Recall: 0.6930
- F1: 0.6688
- Accuracy: 0.9319

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 63 | 0.2794 | 0.3775 | 0.4525 | 0.4116 | 0.8945 |
| No log | 2.0 | 126 | 0.2119 | 0.6143 | 0.6670 | 0.6396 | 0.9266 |
| No log | 3.0 | 189 | 0.2025 | 0.6462 | 0.6930 | 0.6688 | 0.9319 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_500v1_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T23:36:15Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article500v1_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T23:31:14Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v1_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v1_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article500v1_wikigold_split
      type: article500v1_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.6614785992217899
    - name: Recall
      type: recall
      value: 0.6746031746031746
    - name: F1
      type: f1
      value: 0.6679764243614931
    - name: Accuracy
      type: accuracy
      value: 0.9325595601710446
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_500v1_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2058
- Precision: 0.6615
- Recall: 0.6746
- F1: 0.6680
- Accuracy: 0.9326

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 58 | 0.3029 | 0.3539 | 0.3790 | 0.3660 | 0.8967 |
| No log | 2.0 | 116 | 0.2191 | 0.6223 | 0.6488 | 0.6353 | 0.9262 |
| No log | 3.0 | 174 | 0.2058 | 0.6615 | 0.6746 | 0.6680 | 0.9326 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_500v0_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T23:30:34Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article500v0_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T23:25:11Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v0_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v0_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article500v0_wikigold_split
      type: article500v0_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.6387981711299804
    - name: Recall
      type: recall
      value: 0.7249814677538917
    - name: F1
      type: f1
      value: 0.6791666666666667
    - name: Accuracy
      type: accuracy
      value: 0.9364674441205053
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_500v0_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1853
- Precision: 0.6388
- Recall: 0.7250
- F1: 0.6792
- Accuracy: 0.9365

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 59 | 0.2886 | 0.4480 | 0.6179 | 0.5194 | 0.9012 |
| No log | 2.0 | 118 | 0.1912 | 0.6132 | 0.6946 | 0.6514 | 0.9327 |
| No log | 3.0 | 177 | 0.1853 | 0.6388 | 0.7250 | 0.6792 | 0.9365 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
BigSalmon/InformalToFormalLincoln64Paraphrase
BigSalmon
2022-08-11T23:20:25Z
161
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-11T22:55:51Z
```
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln64Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln64Paraphrase")
```

```
Demo: https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```

```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
                         max_length=10 + len(prompt),
                         temperature=1.0,
                         top_k=50,
                         top_p=0.95,
                         do_sample=True,
                         num_return_sequences=5,
                         early_stopping=True)
for i in range(5):
    print(tokenizer.decode(outputs[i]))
```

Most likely outputs:

```
import torch  # needed for the manual forward pass below

device = "cuda" if torch.cuda.is_available() else "cpu"  # assumption: use GPU when available
model = model.to(device)

prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput.to(device)
logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
print(best_words)
```

```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```

```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```

```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```

```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```

```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```

```
- nebraska
- unicameral legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```

```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```

```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```

```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```

```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```

Keywords to sentences or sentence.

```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```

```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```

```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```

```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```

```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```

```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {D} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```

Infill / Infilling / Masking / Phrase Masking

```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
```
DOOGLAK/Article_250v6_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T23:17:21Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article250v6_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T23:12:08Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article250v6_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_250v6_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article250v6_wikigold_split
      type: article250v6_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.3970455230630087
    - name: Recall
      type: recall
      value: 0.3699438202247191
    - name: F1
      type: f1
      value: 0.3830158499345645
    - name: Accuracy
      type: accuracy
      value: 0.8862729247713839
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_250v6_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article250v6_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3052
- Precision: 0.3970
- Recall: 0.3699
- F1: 0.3830
- Accuracy: 0.8863

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 29 | 0.5222 | 0.1785 | 0.0817 | 0.1121 | 0.8202 |
| No log | 2.0 | 58 | 0.3356 | 0.3575 | 0.3357 | 0.3462 | 0.8780 |
| No log | 3.0 | 87 | 0.3052 | 0.3970 | 0.3699 | 0.3830 | 0.8863 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_250v5_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T23:11:37Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article250v5_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T23:06:42Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article250v5_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_250v5_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article250v5_wikigold_split
      type: article250v5_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.3979099678456592
    - name: Recall
      type: recall
      value: 0.4221148379761228
    - name: F1
      type: f1
      value: 0.4096551724137931
    - name: Accuracy
      type: accuracy
      value: 0.8778839730743538
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_250v5_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article250v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3250
- Precision: 0.3979
- Recall: 0.4221
- F1: 0.4097
- Accuracy: 0.8779

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 31 | 0.5229 | 0.1336 | 0.0344 | 0.0547 | 0.8008 |
| No log | 2.0 | 62 | 0.3701 | 0.3628 | 0.3357 | 0.3487 | 0.8596 |
| No log | 3.0 | 93 | 0.3250 | 0.3979 | 0.4221 | 0.4097 | 0.8779 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_100v9_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T22:38:08Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article100v9_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T22:33:07Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article100v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_100v9_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article100v9_wikigold_split
      type: article100v9_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.14901960784313725
    - name: Recall
      type: recall
      value: 0.03918535705078628
    - name: F1
      type: f1
      value: 0.06205348030210247
    - name: Accuracy
      type: accuracy
      value: 0.8030657373746729
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_100v9_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5642
- Precision: 0.1490
- Recall: 0.0392
- F1: 0.0621
- Accuracy: 0.8031

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 13 | 0.7073 | 0.0 | 0.0 | 0.0 | 0.7816 |
| No log | 2.0 | 26 | 0.6007 | 0.0734 | 0.0062 | 0.0114 | 0.7875 |
| No log | 3.0 | 39 | 0.5642 | 0.1490 | 0.0392 | 0.0621 | 0.8031 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_100v8_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T22:32:37Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article100v8_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T22:27:35Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article100v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_100v8_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article100v8_wikigold_split
      type: article100v8_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.0
    - name: Recall
      type: recall
      value: 0.0
    - name: F1
      type: f1
      value: 0.0
    - name: Accuracy
      type: accuracy
      value: 0.7750257997936016
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_100v8_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6455
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7750

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 12 | 0.7691 | 0.0 | 0.0 | 0.0 | 0.7750 |
| No log | 2.0 | 24 | 0.6860 | 0.0 | 0.0 | 0.0 | 0.7750 |
| No log | 3.0 | 36 | 0.6455 | 0.0 | 0.0 | 0.0 | 0.7750 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
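A note on the all-zero precision/recall/F1 above: seqeval-style span scoring gives no credit for the dominant "O" tag, so a model that collapses to predicting only "O" keeps high token accuracy while finding no entities. A minimal sketch of that effect (the toy tag sequences and the `seqeval` dependency are illustrative assumptions, not part of this card):

```
# Illustrative only: why span metrics can all be 0.0 while accuracy stays high.
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [["B-PER", "I-PER", "O", "O", "B-LOC", "O"]]
y_pred = [["O", "O", "O", "O", "O", "O"]]  # degenerate all-"O" prediction

print(precision_score(y_true, y_pred))  # 0.0 -- no entity spans predicted
print(recall_score(y_true, y_pred))     # 0.0 -- no gold spans recovered
print(f1_score(y_true, y_pred))         # 0.0
print(accuracy_score(y_true, y_pred))   # 4/6 -- token-level, inflated by "O"
```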
DOOGLAK/Article_100v6_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T22:21:24Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article100v6_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T22:16:25Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article100v6_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_100v6_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article100v6_wikigold_split
      type: article100v6_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.0
    - name: Recall
      type: recall
      value: 0.0
    - name: F1
      type: f1
      value: 0.0
    - name: Accuracy
      type: accuracy
      value: 0.7806604861399382
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_100v6_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v6_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5955
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7807

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 12 | 0.7335 | 0.0 | 0.0 | 0.0 | 0.7806 |
| No log | 2.0 | 24 | 0.6302 | 0.0 | 0.0 | 0.0 | 0.7806 |
| No log | 3.0 | 36 | 0.5955 | 0.0 | 0.0 | 0.0 | 0.7807 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
cansen88/PromptGenerator_5_topic_finetuned
cansen88
2022-08-11T22:13:34Z
5
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-08-11T21:39:35Z
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: PromptGenerator_5_topic_finetuned
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# PromptGenerator_5_topic_finetuned

This model is a fine-tuned version of [kmkarakaya/turkishReviews-ds](https://huggingface.co/kmkarakaya/turkishReviews-ds) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6861
- Train Sparse Categorical Accuracy: 0.8150
- Validation Loss: 1.9777
- Validation Sparse Categorical Accuracy: 0.7250
- Epoch: 4

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:---:|:---:|:---:|:---:|:---:|
| 3.0394 | 0.5171 | 2.7152 | 0.5841 | 0 |
| 2.5336 | 0.6247 | 2.4440 | 0.6318 | 1 |
| 2.2002 | 0.6958 | 2.2557 | 0.6659 | 2 |
| 1.9241 | 0.7608 | 2.1059 | 0.6932 | 3 |
| 1.6861 | 0.8150 | 1.9777 | 0.7250 | 4 |

### Framework versions

- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
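A minimal generation sketch for the TensorFlow checkpoint (the Turkish prompt is illustrative; the card does not document a prompt format):

```
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("cansen88/PromptGenerator_5_topic_finetuned")
model = TFAutoModelForCausalLM.from_pretrained("cansen88/PromptGenerator_5_topic_finetuned")

inputs = tokenizer("Bu ürün", return_tensors="tf")  # illustrative Turkish prompt
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```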
DOOGLAK/Article_100v3_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T22:04:16Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article100v3_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T21:59:14Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article100v3_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_100v3_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article100v3_wikigold_split
      type: article100v3_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.0
    - name: Recall
      type: recall
      value: 0.0
    - name: F1
      type: f1
      value: 0.0
    - name: Accuracy
      type: accuracy
      value: 0.7772145452862069
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_100v3_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6272
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7772

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 11 | 0.7637 | 0.0 | 0.0 | 0.0 | 0.7772 |
| No log | 2.0 | 22 | 0.6651 | 0.0 | 0.0 | 0.0 | 0.7772 |
| No log | 3.0 | 33 | 0.6272 | 0.0 | 0.0 | 0.0 | 0.7772 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_100v1_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T21:53:29Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article100v1_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T21:48:36Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article100v1_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_100v1_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article100v1_wikigold_split
      type: article100v1_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.06
    - name: Recall
      type: recall
      value: 0.0015592515592515593
    - name: F1
      type: f1
      value: 0.00303951367781155
    - name: Accuracy
      type: accuracy
      value: 0.7832046377355834
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_100v1_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5783
- Precision: 0.06
- Recall: 0.0016
- F1: 0.0030
- Accuracy: 0.7832

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 13 | 0.7124 | 0.0 | 0.0 | 0.0 | 0.7816 |
| No log | 2.0 | 26 | 0.6131 | 0.0 | 0.0 | 0.0 | 0.7819 |
| No log | 3.0 | 39 | 0.5783 | 0.06 | 0.0016 | 0.0030 | 0.7832 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_100v0_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T21:48:03Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article100v0_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T21:43:03Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article100v0_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_100v0_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article100v0_wikigold_split
      type: article100v0_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.25
    - name: Recall
      type: recall
      value: 0.0002523977788995457
    - name: F1
      type: f1
      value: 0.0005042864346949066
    - name: Accuracy
      type: accuracy
      value: 0.7772140114046316
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_100v0_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6037
- Precision: 0.25
- Recall: 0.0003
- F1: 0.0005
- Accuracy: 0.7772

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 12 | 0.7472 | 0.0 | 0.0 | 0.0 | 0.7772 |
| No log | 2.0 | 24 | 0.6443 | 0.0 | 0.0 | 0.0 | 0.7772 |
| No log | 3.0 | 36 | 0.6037 | 0.25 | 0.0003 | 0.0005 | 0.7772 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_50v9_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T21:42:33Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article50v9_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T21:37:41Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article50v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_50v9_NER_Model_3Epochs_UNAUGMENTED
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: article50v9_wikigold_split
      type: article50v9_wikigold_split
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.0
    - name: Recall
      type: recall
      value: 0.0
    - name: F1
      type: f1
      value: 0.0
    - name: Accuracy
      type: accuracy
      value: 0.7781540876976561
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_50v9_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7640
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7782

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 6 | 0.9810 | 0.0918 | 0.0044 | 0.0084 | 0.7772 |
| No log | 2.0 | 12 | 0.7952 | 0.0 | 0.0 | 0.0 | 0.7782 |
| No log | 3.0 | 18 | 0.7640 | 0.0 | 0.0 | 0.0 | 0.7782 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
DOOGLAK/Article_50v8_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T21:37:12Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article50v8_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T21:32:16Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - article50v8_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Article_50v8_NER_Model_3Epochs_UNAUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: article50v8_wikigold_split type: article50v8_wikigold_split args: default metrics: - name: Precision type: precision value: 0.0 - name: Recall type: recall value: 0.0 - name: F1 type: f1 value: 0.0 - name: Accuracy type: accuracy value: 0.7786409940669428 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Article_50v8_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v8_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.7555 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.7786 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 6 | 0.9789 | 0.1 | 0.0047 | 0.0089 | 0.7776 | | No log | 2.0 | 12 | 0.7892 | 0.0 | 0.0 | 0.0 | 0.7786 | | No log | 3.0 | 18 | 0.7555 | 0.0 | 0.0 | 0.0 | 0.7786 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Article_50v7_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T21:31:46Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article50v7_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T21:26:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - article50v7_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Article_50v7_NER_Model_3Epochs_UNAUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: article50v7_wikigold_split type: article50v7_wikigold_split args: default metrics: - name: Precision type: precision value: 0.3333333333333333 - name: Recall type: recall value: 0.00024324981756263683 - name: F1 type: f1 value: 0.0004861448711716091 - name: Accuracy type: accuracy value: 0.7783221476510067 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Article_50v7_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v7_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.7894 - Precision: 0.3333 - Recall: 0.0002 - F1: 0.0005 - Accuracy: 0.7783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 6 | 1.0271 | 0.1183 | 0.0102 | 0.0188 | 0.7768 | | No log | 2.0 | 12 | 0.8250 | 0.4 | 0.0005 | 0.0010 | 0.7783 | | No log | 3.0 | 18 | 0.7894 | 0.3333 | 0.0002 | 0.0005 | 0.7783 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Article_50v6_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T21:26:06Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article50v6_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T21:21:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - article50v6_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Article_50v6_NER_Model_3Epochs_UNAUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: article50v6_wikigold_split type: article50v6_wikigold_split args: default metrics: - name: Precision type: precision value: 0.0 - name: Recall type: recall value: 0.0 - name: F1 type: f1 value: 0.0 - name: Accuracy type: accuracy value: 0.7772842497251946 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Article_50v6_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v6_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.7622 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.7773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 7 | 0.9429 | 0.1579 | 0.0015 | 0.0029 | 0.7769 | | No log | 2.0 | 14 | 0.7845 | 0.0 | 0.0 | 0.0 | 0.7773 | | No log | 3.0 | 21 | 0.7622 | 0.0 | 0.0 | 0.0 | 0.7773 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Article_50v4_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T21:15:09Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article50v4_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T21:10:23Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - article50v4_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Article_50v4_NER_Model_3Epochs_UNAUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: article50v4_wikigold_split type: article50v4_wikigold_split args: default metrics: - name: Precision type: precision value: 0.0 - name: Recall type: recall value: 0.0 - name: F1 type: f1 value: 0.0 - name: Accuracy type: accuracy value: 0.7775440794773114 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Article_50v4_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v4_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.7543 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.7775 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 6 | 0.9689 | 0.0949 | 0.0036 | 0.0070 | 0.7766 | | No log | 2.0 | 12 | 0.7856 | 0.0 | 0.0 | 0.0 | 0.7775 | | No log | 3.0 | 18 | 0.7543 | 0.0 | 0.0 | 0.0 | 0.7775 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Article_50v1_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
2022-08-11T20:58:43Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:article50v1_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T20:53:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - article50v1_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Article_50v1_NER_Model_3Epochs_UNAUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: article50v1_wikigold_split type: article50v1_wikigold_split args: default metrics: - name: Precision type: precision value: 0.0 - name: Recall type: recall value: 0.0 - name: F1 type: f1 value: 0.0 - name: Accuracy type: accuracy value: 0.7774799531489324 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Article_50v1_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v1_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.7237 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.7775 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 7 | 0.9016 | 0.12 | 0.0007 | 0.0015 | 0.7772 | | No log | 2.0 | 14 | 0.7468 | 0.0 | 0.0 | 0.0 | 0.7775 | | No log | 3.0 | 21 | 0.7237 | 0.0 | 0.0 | 0.0 | 0.7775 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_500v6_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T20:25:04Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni500v6_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T20:19:30Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni500v6_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_500v6_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni500v6_wikigold_split type: tagged_uni500v6_wikigold_split args: default metrics: - name: Precision type: precision value: 0.699155524278677 - name: Recall type: recall value: 0.6986638537271449 - name: F1 type: f1 value: 0.6989096025325361 - name: Accuracy type: accuracy value: 0.9317908843795436 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_500v6_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni500v6_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2386 - Precision: 0.6992 - Recall: 0.6987 - F1: 0.6989 - Accuracy: 0.9318 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 182 | 0.2452 | 0.5956 | 0.5432 | 0.5682 | 0.9189 | | No log | 2.0 | 364 | 0.2571 | 0.6832 | 0.6354 | 0.6584 | 0.9204 | | 0.1093 | 3.0 | 546 | 0.2386 | 0.6992 | 0.6987 | 0.6989 | 0.9318 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
cansen88/PromptGenerator_32_topic_finetuned
cansen88
2022-08-11T20:18:21Z
63
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-08-11T19:49:31Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: PromptGenerator_32_topic_finetuned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # PromptGenerator_32_topic_finetuned This model is a fine-tuned version of [kmkarakaya/turkishReviews-ds](https://huggingface.co/kmkarakaya/turkishReviews-ds) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0569 - Train Sparse Categorical Accuracy: 1.0 - Validation Loss: 0.0787 - Validation Sparse Categorical Accuracy: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 1.7185 | 0.7860 | 0.5569 | 0.9868 | 0 | | 0.4711 | 0.9958 | 0.2097 | 0.9995 | 1 | | 0.2016 | 1.0000 | 0.1197 | 0.9999 | 2 | | 0.1014 | 1.0 | 0.0903 | 0.9999 | 3 | | 0.0569 | 1.0 | 0.0787 | 1.0 | 4 | ### Framework versions - Transformers 4.21.1 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
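The card leaves usage unspecified, so here is a hedged generation sketch for this TF checkpoint. The Turkish prompt is an arbitrary example (the base model was fine-tuned from a Turkish-reviews GPT-2), and the sampling settings are illustrative, not the author's.

```python
from transformers import AutoTokenizer, TFGPT2LMHeadModel

repo = "cansen88/PromptGenerator_32_topic_finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFGPT2LMHeadModel.from_pretrained(repo)

inputs = tokenizer("Bu ürün", return_tensors="tf")  # assumed Turkish prompt
outputs = model.generate(
    **inputs,
    max_length=60,    # illustrative length budget
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # assumes a standard GPT-2 eos token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```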
DOOGLAK/Tagged_Uni_500v2_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T20:01:39Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni500v2_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T19:56:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni500v2_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_500v2_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni500v2_wikigold_split type: tagged_uni500v2_wikigold_split args: default metrics: - name: Precision type: precision value: 0.7018014564967421 - name: Recall type: recall value: 0.6811755952380952 - name: F1 type: f1 value: 0.6913347177647726 - name: Accuracy type: accuracy value: 0.926232333678042 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_500v2_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni500v2_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2645 - Precision: 0.7018 - Recall: 0.6812 - F1: 0.6913 - Accuracy: 0.9262 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 171 | 0.2364 | 0.6168 | 0.5804 | 0.5980 | 0.9178 | | No log | 2.0 | 342 | 0.2626 | 0.6815 | 0.6417 | 0.6610 | 0.9210 | | 0.1121 | 3.0 | 513 | 0.2645 | 0.7018 | 0.6812 | 0.6913 | 0.9262 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
0x-YuAN/CL_1
0x-YuAN
2022-08-11T19:56:16Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain", "zh", "dataset:yuan1729/autotrain-data-YuAN-lawthone-CL_facts_backTrans", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-11T18:47:58Z
---
tags:
- autotrain
- text-classification
language:
- zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- yuan1729/autotrain-data-YuAN-lawthone-CL_facts_backTrans
co2_eq_emissions:
  emissions: 151.97297148175758
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 1241547318
- CO2 Emissions (in grams): 151.9730

## Validation Metrics

- Loss: 0.512
- Accuracy: 0.862
- Macro F1: 0.862
- Micro F1: 0.862
- Weighted F1: 0.862
- Macro Precision: 0.863
- Micro Precision: 0.862
- Weighted Precision: 0.863
- Macro Recall: 0.862
- Micro Recall: 0.862
- Weighted Recall: 0.862

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/yuan1729/autotrain-YuAN-lawthone-CL_facts_backTrans-1241547318
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("yuan1729/autotrain-YuAN-lawthone-CL_facts_backTrans-1241547318", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("yuan1729/autotrain-YuAN-lawthone-CL_facts_backTrans-1241547318", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
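The snippet above stops at the raw `outputs`; a short continuation turns the logits into a class label. The label names come from the model's own config, so no assumption about the class set is needed.

```python
import torch

# Continuing the snippet above: softmax over the logits, then look up the
# predicted class name in the model's config.
probs = torch.softmax(outputs.logits, dim=-1)  # shape (1, num_labels)
pred = int(probs.argmax(dim=-1))
print(model.config.id2label[pred], float(probs[0, pred]))
```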
DOOGLAK/Tagged_Uni_250v8_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T19:39:39Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni250v8_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T19:35:03Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni250v8_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_250v8_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni250v8_wikigold_split type: tagged_uni250v8_wikigold_split args: default metrics: - name: Precision type: precision value: 0.5548306927617273 - name: Recall type: recall value: 0.4939159292035398 - name: F1 type: f1 value: 0.5226042428675933 - name: Accuracy type: accuracy value: 0.8976334059696954 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_250v8_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v8_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3186 - Precision: 0.5548 - Recall: 0.4939 - F1: 0.5226 - Accuracy: 0.8976 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 95 | 0.4132 | 0.3646 | 0.2008 | 0.2590 | 0.8504 | | No log | 2.0 | 190 | 0.2983 | 0.5077 | 0.4552 | 0.4800 | 0.8977 | | No log | 3.0 | 285 | 0.3186 | 0.5548 | 0.4939 | 0.5226 | 0.8976 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_250v7_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T19:34:30Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni250v7_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T19:29:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni250v7_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_250v7_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni250v7_wikigold_split type: tagged_uni250v7_wikigold_split args: default metrics: - name: Precision type: precision value: 0.5764667106130521 - name: Recall type: recall value: 0.4908784731967443 - name: F1 type: f1 value: 0.5302410186448385 - name: Accuracy type: accuracy value: 0.8988380555625267 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_250v7_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v7_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3477 - Precision: 0.5765 - Recall: 0.4909 - F1: 0.5302 - Accuracy: 0.8988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 90 | 0.3902 | 0.2262 | 0.1524 | 0.1821 | 0.8474 | | No log | 2.0 | 180 | 0.3612 | 0.5340 | 0.4471 | 0.4867 | 0.8914 | | No log | 3.0 | 270 | 0.3477 | 0.5765 | 0.4909 | 0.5302 | 0.8988 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_250v5_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T19:23:01Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni250v5_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T19:17:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni250v5_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_250v5_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni250v5_wikigold_split type: tagged_uni250v5_wikigold_split args: default metrics: - name: Precision type: precision value: 0.5808346213292117 - name: Recall type: recall value: 0.5341102899374645 - name: F1 type: f1 value: 0.5564934103361469 - name: Accuracy type: accuracy value: 0.9006217563331792 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_250v5_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v5_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3324 - Precision: 0.5808 - Recall: 0.5341 - F1: 0.5565 - Accuracy: 0.9006 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 99 | 0.4305 | 0.3110 | 0.2149 | 0.2542 | 0.8533 | | No log | 2.0 | 198 | 0.3340 | 0.5449 | 0.4935 | 0.5179 | 0.8956 | | No log | 3.0 | 297 | 0.3324 | 0.5808 | 0.5341 | 0.5565 | 0.9006 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_250v1_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T19:00:36Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni250v1_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T18:55:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni250v1_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_250v1_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni250v1_wikigold_split type: tagged_uni250v1_wikigold_split args: default metrics: - name: Precision type: precision value: 0.5971956660293181 - name: Recall type: recall value: 0.5290796160361377 - name: F1 type: f1 value: 0.5610778443113772 - name: Accuracy type: accuracy value: 0.906793008840565 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_250v1_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v1_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3057 - Precision: 0.5972 - Recall: 0.5291 - F1: 0.5611 - Accuracy: 0.9068 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 87 | 0.3972 | 0.2749 | 0.2081 | 0.2369 | 0.8625 | | No log | 2.0 | 174 | 0.2895 | 0.5545 | 0.5054 | 0.5288 | 0.9059 | | No log | 3.0 | 261 | 0.3057 | 0.5972 | 0.5291 | 0.5611 | 0.9068 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_250v0_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T18:54:52Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni250v0_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T18:49:41Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni250v0_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_250v0_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni250v0_wikigold_split type: tagged_uni250v0_wikigold_split args: default metrics: - name: Precision type: precision value: 0.4747682801235839 - name: Recall type: recall value: 0.37317862924986506 - name: F1 type: f1 value: 0.41788789847408975 - name: Accuracy type: accuracy value: 0.8846524500234748 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_250v0_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v0_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3679 - Precision: 0.4748 - Recall: 0.3732 - F1: 0.4179 - Accuracy: 0.8847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 91 | 0.4333 | 0.2856 | 0.1851 | 0.2246 | 0.8440 | | No log | 2.0 | 182 | 0.3466 | 0.3907 | 0.3038 | 0.3418 | 0.8794 | | No log | 3.0 | 273 | 0.3679 | 0.4748 | 0.3732 | 0.4179 | 0.8847 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_100v9_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T18:49:10Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni100v9_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T18:44:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni100v9_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_100v9_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni100v9_wikigold_split type: tagged_uni100v9_wikigold_split args: default metrics: - name: Precision type: precision value: 0.3227436823104693 - name: Recall type: recall value: 0.23047177107501934 - name: F1 type: f1 value: 0.268912618438863 - name: Accuracy type: accuracy value: 0.8556973163220414 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_100v9_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v9_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4080 - Precision: 0.3227 - Recall: 0.2305 - F1: 0.2689 - Accuracy: 0.8557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 39 | 0.4881 | 0.2185 | 0.0487 | 0.0797 | 0.8066 | | No log | 2.0 | 78 | 0.4431 | 0.2831 | 0.1536 | 0.1992 | 0.8387 | | No log | 3.0 | 117 | 0.4080 | 0.3227 | 0.2305 | 0.2689 | 0.8557 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
andres-hsn/Reinforce-AndresV0
andres-hsn
2022-08-11T18:47:14Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-08-11T18:42:39Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-AndresV0
  results:
  - metrics:
    - type: mean_reward
      value: 64.50 +/- 5.39
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
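For readers landing here without the course, a minimal self-contained REINFORCE loop is sketched below. It is not the author's implementation: the network size, learning rate, discount factor, and the classic `gym` step API (4-tuple returns) are all assumptions.

```python
import gym
import torch
import torch.nn as nn
from torch.distributions import Categorical

env = gym.make("CartPole-v1")
# Tiny assumed policy network: 4 observations -> 2 action probabilities.
policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(),
                       nn.Linear(16, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99  # assumed discount factor

for episode in range(500):
    log_probs, rewards = [], []
    state = env.reset()  # classic gym API: reset() returns the observation
    done = False
    while not done:
        dist = Categorical(policy(torch.as_tensor(state, dtype=torch.float32)))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done, _ = env.step(action.item())  # classic 4-tuple API
        rewards.append(reward)
    # Discounted returns G_t, computed backwards over the episode.
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    # REINFORCE objective: maximize sum of log-prob * return.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```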
DOOGLAK/Tagged_Uni_100v8_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T18:43:35Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni100v8_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T18:38:39Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni100v8_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_100v8_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni100v8_wikigold_split type: tagged_uni100v8_wikigold_split args: default metrics: - name: Precision type: precision value: 0.23410202655485673 - name: Recall type: recall value: 0.08220858895705521 - name: F1 type: f1 value: 0.12168543407192152 - name: Accuracy type: accuracy value: 0.8133929595229905 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_100v8_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v8_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.5374 - Precision: 0.2341 - Recall: 0.0822 - F1: 0.1217 - Accuracy: 0.8134 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 39 | 0.5752 | 0.0227 | 0.0002 | 0.0005 | 0.7844 | | No log | 2.0 | 78 | 0.5425 | 0.2209 | 0.0498 | 0.0813 | 0.8052 | | No log | 3.0 | 117 | 0.5374 | 0.2341 | 0.0822 | 0.1217 | 0.8134 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_100v5_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T18:26:29Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni100v5_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T18:22:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni100v5_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_100v5_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni100v5_wikigold_split type: tagged_uni100v5_wikigold_split args: default metrics: - name: Precision type: precision value: 0.27475592747559274 - name: Recall type: recall value: 0.20112302194997447 - name: F1 type: f1 value: 0.2322428529325081 - name: Accuracy type: accuracy value: 0.8489666875886277 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_100v5_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v5_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4479 - Precision: 0.2748 - Recall: 0.2011 - F1: 0.2322 - Accuracy: 0.8490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 39 | 0.4908 | 0.2544 | 0.1445 | 0.1843 | 0.8292 | | No log | 2.0 | 78 | 0.4703 | 0.2611 | 0.1881 | 0.2187 | 0.8437 | | No log | 3.0 | 117 | 0.4479 | 0.2748 | 0.2011 | 0.2322 | 0.8490 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_100v4_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T18:21:22Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni100v4_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T18:16:02Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni100v4_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_100v4_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni100v4_wikigold_split type: tagged_uni100v4_wikigold_split args: default metrics: - name: Precision type: precision value: 0.25279187817258886 - name: Recall type: recall value: 0.19148936170212766 - name: F1 type: f1 value: 0.2179113185530922 - name: Accuracy type: accuracy value: 0.8640945027509362 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_100v4_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v4_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3691 - Precision: 0.2528 - Recall: 0.1915 - F1: 0.2179 - Accuracy: 0.8641 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 34 | 0.5215 | 0.1087 | 0.0026 | 0.0050 | 0.7980 | | No log | 2.0 | 68 | 0.3908 | 0.2356 | 0.1515 | 0.1844 | 0.8527 | | No log | 3.0 | 102 | 0.3691 | 0.2528 | 0.1915 | 0.2179 | 0.8641 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_50v9_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T17:52:48Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v9_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T17:47:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni50v9_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_50v9_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni50v9_wikigold_split type: tagged_uni50v9_wikigold_split args: default metrics: - name: Precision type: precision value: 0.5 - name: Recall type: recall value: 0.000243605359317905 - name: F1 type: f1 value: 0.00048697345994643296 - name: Accuracy type: accuracy value: 0.7843220814175171 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_50v9_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v9_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6233 - Precision: 0.5 - Recall: 0.0002 - F1: 0.0005 - Accuracy: 0.7843 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 16 | 0.7531 | 0.0 | 0.0 | 0.0 | 0.7788 | | No log | 2.0 | 32 | 0.6599 | 0.5 | 0.0002 | 0.0005 | 0.7823 | | No log | 3.0 | 48 | 0.6233 | 0.5 | 0.0002 | 0.0005 | 0.7843 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_50v7_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T17:41:22Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v7_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T17:37:16Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni50v7_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_50v7_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni50v7_wikigold_split type: tagged_uni50v7_wikigold_split args: default metrics: - name: Precision type: precision value: 0.0 - name: Recall type: recall value: 0.0 - name: F1 type: f1 value: 0.0 - name: Accuracy type: accuracy value: 0.7783445190156599 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_50v7_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v7_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6772 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.7783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 12 | 0.7850 | 0.0 | 0.0 | 0.0 | 0.7783 | | No log | 2.0 | 24 | 0.7010 | 0.0 | 0.0 | 0.0 | 0.7783 | | No log | 3.0 | 36 | 0.6772 | 0.0 | 0.0 | 0.0 | 0.7783 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_50v5_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T17:31:02Z
105
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v5_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T17:26:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni50v5_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_50v5_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni50v5_wikigold_split type: tagged_uni50v5_wikigold_split args: default metrics: - name: Precision type: precision value: 0.23113964686998395 - name: Recall type: recall value: 0.03495994173343044 - name: F1 type: f1 value: 0.06073386756642767 - name: Accuracy type: accuracy value: 0.7909374089595052 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_50v5_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v5_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6039 - Precision: 0.2311 - Recall: 0.0350 - F1: 0.0607 - Accuracy: 0.7909 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 26 | 0.6534 | 0.0 | 0.0 | 0.0 | 0.7773 | | No log | 2.0 | 52 | 0.6056 | 0.1294 | 0.0097 | 0.0181 | 0.7846 | | No log | 3.0 | 78 | 0.6039 | 0.2311 | 0.0350 | 0.0607 | 0.7909 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_50v3_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T17:20:04Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v3_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T17:14:45Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni50v3_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_50v3_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni50v3_wikigold_split type: tagged_uni50v3_wikigold_split args: default metrics: - name: Precision type: precision value: 0.14766839378238342 - name: Recall type: recall value: 0.013980868285504048 - name: F1 type: f1 value: 0.025543356486668164 - name: Accuracy type: accuracy value: 0.7865287304621612 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_50v3_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v3_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.5987 - Precision: 0.1477 - Recall: 0.0140 - F1: 0.0255 - Accuracy: 0.7865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 14 | 0.7260 | 0.0 | 0.0 | 0.0 | 0.7789 | | No log | 2.0 | 28 | 0.6256 | 0.1436 | 0.0140 | 0.0255 | 0.7865 | | No log | 3.0 | 42 | 0.5987 | 0.1477 | 0.0140 | 0.0255 | 0.7865 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_50v2_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T17:14:13Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v2_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T17:08:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni50v2_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_50v2_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni50v2_wikigold_split type: tagged_uni50v2_wikigold_split args: default metrics: - name: Precision type: precision value: 0.08 - name: Recall type: recall value: 0.0004884004884004884 - name: F1 type: f1 value: 0.0009708737864077671 - name: Accuracy type: accuracy value: 0.7850352033723486 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_50v2_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v2_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6159 - Precision: 0.08 - Recall: 0.0005 - F1: 0.0010 - Accuracy: 0.7850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 16 | 0.7399 | 0.0 | 0.0 | 0.0 | 0.7779 | | No log | 2.0 | 32 | 0.6545 | 0.0833 | 0.0002 | 0.0005 | 0.7817 | | No log | 3.0 | 48 | 0.6159 | 0.08 | 0.0005 | 0.0010 | 0.7850 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_Uni_50v1_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T17:08:03Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_uni50v1_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T17:03:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_uni50v1_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_Uni_50v1_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_uni50v1_wikigold_split type: tagged_uni50v1_wikigold_split args: default metrics: - name: Precision type: precision value: 0.14664804469273743 - name: Recall type: recall value: 0.025647288715192965 - name: F1 type: f1 value: 0.043659043659043655 - name: Accuracy type: accuracy value: 0.7940580232453374 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_50v1_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v1_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.5851 - Precision: 0.1466 - Recall: 0.0256 - F1: 0.0437 - Accuracy: 0.7941 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 24 | 0.6704 | 0.0 | 0.0 | 0.0 | 0.7775 | | No log | 2.0 | 48 | 0.5824 | 0.1479 | 0.0154 | 0.0279 | 0.7895 | | No log | 3.0 | 72 | 0.5851 | 0.1466 | 0.0256 | 0.0437 | 0.7941 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
Adapting/cifar10-image-classification
Adapting
2022-08-11T16:58:16Z
0
0
null
[ "pytorch", "region:us" ]
null
2022-08-11T16:43:20Z
# how to use

```python
# !pip install torch huggingface_hub
import torch
import torch.nn as nn
import torch.nn.functional as F
from huggingface_hub import PyTorchModelHubMixin


class Net(nn.Module, PyTorchModelHubMixin):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net.from_pretrained('Adapting/cifar10-image-classification')
```

example codes for testing the model: [link](https://colab.research.google.com/drive/10xjbgSzw-U1Y4vCot5aqqdOi7AhmIkC3?usp=sharing)
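Loading is only half the story; a short inference sketch continuing from the snippet above (it reuses `net`, and assumes the standard PyTorch CIFAR-10 tutorial preprocessing, which this card does not actually specify):

```python
import torch
import torchvision
import torchvision.transforms as transforms

# Assumed preprocessing: the standard PyTorch CIFAR-10 tutorial transform.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
loader = torch.utils.data.DataLoader(testset, batch_size=4)

classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

net.eval()
with torch.no_grad():
    images, labels = next(iter(loader))
    outputs = net(images)               # logits, shape (4, 10)
    predicted = outputs.argmax(dim=1)   # highest-scoring class per image
    print([classes[p] for p in predicted])
```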
DOOGLAK/Tagged_One_500v9_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T16:57:16Z
96
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one500v9_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T16:52:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one500v9_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_500v9_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one500v9_wikigold_split type: tagged_one500v9_wikigold_split args: default metrics: - name: Precision type: precision value: 0.7016183412002697 - name: Recall type: recall value: 0.7011455525606469 - name: F1 type: f1 value: 0.7013818672059319 - name: Accuracy type: accuracy value: 0.9284582154955403 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_500v9_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v9_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2469 - Precision: 0.7016 - Recall: 0.7011 - F1: 0.7014 - Accuracy: 0.9285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 170 | 0.2908 | 0.5414 | 0.4538 | 0.4938 | 0.9011 | | No log | 2.0 | 340 | 0.2680 | 0.6629 | 0.6253 | 0.6436 | 0.9172 | | 0.1121 | 3.0 | 510 | 0.2469 | 0.7016 | 0.7011 | 0.7014 | 0.9285 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
waynedsouza/distilbert-base-uncased-gc-art2e
waynedsouza
2022-08-11T16:45:26Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-11T16:39:42Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-gc-art2e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-gc-art2e This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0863 - Accuracy: 0.982 - F1: 0.9731 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0875 | 1.0 | 32 | 0.0874 | 0.982 | 0.9731 | | 0.0711 | 2.0 | 64 | 0.0863 | 0.982 | 0.9731 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
DOOGLAK/Tagged_One_500v6_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T16:39:36Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one500v6_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T16:33:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one500v6_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_500v6_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one500v6_wikigold_split type: tagged_one500v6_wikigold_split args: default metrics: - name: Precision type: precision value: 0.6866690621631333 - name: Recall type: recall value: 0.6719409282700421 - name: F1 type: f1 value: 0.679225164385996 - name: Accuracy type: accuracy value: 0.9239838169290094 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_500v6_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v6_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2690 - Precision: 0.6867 - Recall: 0.6719 - F1: 0.6792 - Accuracy: 0.9240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 189 | 0.2819 | 0.6009 | 0.5352 | 0.5661 | 0.9105 | | No log | 2.0 | 378 | 0.2614 | 0.6743 | 0.6406 | 0.6571 | 0.9201 | | 0.11 | 3.0 | 567 | 0.2690 | 0.6867 | 0.6719 | 0.6792 | 0.9240 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
QianMolloy/ppo-LunarLander-v2
QianMolloy
2022-08-11T16:23:15Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-08-11T16:22:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 250.97 +/- 23.38 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
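The usage section above is left as a TODO; a minimal sketch assuming the checkpoint follows the usual huggingface_sb3 conventions — the filename `ppo-LunarLander-v2.zip` is a guess, not taken from the card, so check the repo's file list:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; verify against the repository's files.
checkpoint = load_from_hub(repo_id="QianMolloy/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```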
DOOGLAK/Tagged_One_500v3_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T16:21:20Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one500v3_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T16:16:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one500v3_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_500v3_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one500v3_wikigold_split type: tagged_one500v3_wikigold_split args: default metrics: - name: Precision type: precision value: 0.697499143542309 - name: Recall type: recall value: 0.6782145236508994 - name: F1 type: f1 value: 0.6877216686370546 - name: Accuracy type: accuracy value: 0.9245400105495051 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_500v3_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v3_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2659 - Precision: 0.6975 - Recall: 0.6782 - F1: 0.6877 - Accuracy: 0.9245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 175 | 0.2990 | 0.5405 | 0.4600 | 0.4970 | 0.9007 | | No log | 2.0 | 350 | 0.2789 | 0.6837 | 0.6236 | 0.6523 | 0.9157 | | 0.1081 | 3.0 | 525 | 0.2659 | 0.6975 | 0.6782 | 0.6877 | 0.9245 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_One_500v0_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T16:03:05Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one500v0_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T15:57:42Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one500v0_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_500v0_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one500v0_wikigold_split type: tagged_one500v0_wikigold_split args: default metrics: - name: Precision type: precision value: 0.6663055254604551 - name: Recall type: recall value: 0.683839881393625 - name: F1 type: f1 value: 0.6749588439729285 - name: Accuracy type: accuracy value: 0.9260204081632653 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_500v0_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v0_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2679 - Precision: 0.6663 - Recall: 0.6838 - F1: 0.6750 - Accuracy: 0.9260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 173 | 0.2827 | 0.5972 | 0.5556 | 0.5757 | 0.9079 | | No log | 2.0 | 346 | 0.2668 | 0.6442 | 0.6383 | 0.6412 | 0.9204 | | 0.1142 | 3.0 | 519 | 0.2679 | 0.6663 | 0.6838 | 0.6750 | 0.9260 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
TheJarmanitor/q-FrozenLake-v1-4x4-noSlippery
TheJarmanitor
2022-08-11T15:55:03Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-08-11T15:51:58Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage

```python
model = load_from_hub(repo_id="TheJarmanitor/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
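The snippet above leans on helpers (`load_from_hub`, `evaluate_agent`) from the Hugging Face Deep RL course that the card never defines, and it assumes `gym` is imported. A minimal sketch of what those helpers typically look like — the pickle layout and function bodies here are assumptions modeled on the course notebooks, not code from this repo:

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id, filename):
    # Fetch the pickled model dict ({"env_id", "qtable", "max_steps", "n_eval_episodes", "eval_seed", ...}).
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


def evaluate_agent(env, max_steps, n_eval_episodes, qtable, eval_seed):
    # Greedy rollouts of the Q-table; returns mean and std of episode returns.
    # Written against the classic gym API (env.step returns a 4-tuple); newer
    # gym/gymnasium versions return 5 values and seed via env.reset(seed=...).
    episode_rewards = []
    for episode in range(n_eval_episodes):
        if eval_seed:
            env.seed(eval_seed[episode])
        state = env.reset()
        total_reward = 0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the Q-table
            state, reward, done, info = env.step(action)
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
```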
DOOGLAK/Tagged_One_250v7_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T15:45:15Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one250v7_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T15:40:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one250v7_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_250v7_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one250v7_wikigold_split type: tagged_one250v7_wikigold_split args: default metrics: - name: Precision type: precision value: 0.5509259259259259 - name: Recall type: recall value: 0.4675834970530452 - name: F1 type: f1 value: 0.5058448459086079 - name: Accuracy type: accuracy value: 0.8893517705222476 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_250v7_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v7_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3809 - Precision: 0.5509 - Recall: 0.4676 - F1: 0.5058 - Accuracy: 0.8894 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 87 | 0.4450 | 0.1912 | 0.1047 | 0.1353 | 0.8278 | | No log | 2.0 | 174 | 0.3903 | 0.4992 | 0.4176 | 0.4548 | 0.8820 | | No log | 3.0 | 261 | 0.3809 | 0.5509 | 0.4676 | 0.5058 | 0.8894 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
Felipehonorato/storIA
Felipehonorato
2022-08-11T15:38:21Z
1181
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
This model was fine-tuned to generate horror stories in a collaborative way. Check it out on our [repo](https://github.com/TailUFPB/storIA).
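The card gives no usage example; since the tags mark this as a GPT-2 text-generation checkpoint, a minimal sketch with the transformers pipeline (the prompt is invented) could be:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Felipehonorato/storIA")
story = generator("The old house at the end of the street", max_length=60, do_sample=True, top_p=0.95)
print(story[0]["generated_text"])
```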
DOOGLAK/Tagged_One_250v4_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T15:27:24Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one250v4_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T15:22:07Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one250v4_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_250v4_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one250v4_wikigold_split type: tagged_one250v4_wikigold_split args: default metrics: - name: Precision type: precision value: 0.568499837292548 - name: Recall type: recall value: 0.48473917869034405 - name: F1 type: f1 value: 0.5232889022015875 - name: Accuracy type: accuracy value: 0.8927736584139752 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_250v4_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v4_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3389 - Precision: 0.5685 - Recall: 0.4847 - F1: 0.5233 - Accuracy: 0.8928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 87 | 0.4018 | 0.2797 | 0.1842 | 0.2221 | 0.8514 | | No log | 2.0 | 174 | 0.3266 | 0.5245 | 0.4398 | 0.4784 | 0.8888 | | No log | 3.0 | 261 | 0.3389 | 0.5685 | 0.4847 | 0.5233 | 0.8928 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_One_250v3_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T15:21:37Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one250v3_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T15:16:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one250v3_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_250v3_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one250v3_wikigold_split type: tagged_one250v3_wikigold_split args: default metrics: - name: Precision type: precision value: 0.5783339046966061 - name: Recall type: recall value: 0.4806267806267806 - name: F1 type: f1 value: 0.5249727711218297 - name: Accuracy type: accuracy value: 0.8981560947699669 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_250v3_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v3_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3179 - Precision: 0.5783 - Recall: 0.4806 - F1: 0.5250 - Accuracy: 0.8982 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 81 | 0.3974 | 0.2778 | 0.1869 | 0.2235 | 0.8530 | | No log | 2.0 | 162 | 0.3095 | 0.5594 | 0.4470 | 0.4969 | 0.8944 | | No log | 3.0 | 243 | 0.3179 | 0.5783 | 0.4806 | 0.5250 | 0.8982 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_One_250v2_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T15:16:03Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one250v2_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T15:10:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one250v2_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_250v2_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one250v2_wikigold_split type: tagged_one250v2_wikigold_split args: default metrics: - name: Precision type: precision value: 0.5859220092531394 - name: Recall type: recall value: 0.5074413279908414 - name: F1 type: f1 value: 0.5438650306748466 - name: Accuracy type: accuracy value: 0.8979617609173338 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_250v2_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v2_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3573 - Precision: 0.5859 - Recall: 0.5074 - F1: 0.5439 - Accuracy: 0.8980 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 93 | 0.3884 | 0.2899 | 0.2006 | 0.2371 | 0.8583 | | No log | 2.0 | 186 | 0.3502 | 0.5467 | 0.4705 | 0.5058 | 0.8937 | | No log | 3.0 | 279 | 0.3573 | 0.5859 | 0.5074 | 0.5439 | 0.8980 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_One_250v0_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T15:04:33Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one250v0_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T14:59:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one250v0_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_250v0_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one250v0_wikigold_split type: tagged_one250v0_wikigold_split args: default metrics: - name: Precision type: precision value: 0.5125421190565331 - name: Recall type: recall value: 0.3694009713977334 - name: F1 type: f1 value: 0.4293554963148816 - name: Accuracy type: accuracy value: 0.8786972744569918 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_250v0_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v0_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4287 - Precision: 0.5125 - Recall: 0.3694 - F1: 0.4294 - Accuracy: 0.8787 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 96 | 0.4352 | 0.3056 | 0.1692 | 0.2178 | 0.8448 | | No log | 2.0 | 192 | 0.3881 | 0.4394 | 0.3295 | 0.3766 | 0.8773 | | No log | 3.0 | 288 | 0.4287 | 0.5125 | 0.3694 | 0.4294 | 0.8787 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
Cube/ShijiBERT
Cube
2022-08-11T14:39:40Z
2
0
transformers
[ "transformers", "bert", "fill-mask", "zh", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2022-08-11T14:01:58Z
--- language: - "zh" license: "apache-2.0" pipeline_tag: "fill-mask" mask_token: "[MASK]" widget: - text: "[MASK]太元中,武陵人捕鱼为业。" - text: "问征夫以前路,恨晨光之[MASK]微。" - text: "浔阳江头夜送客,枫叶[MASK]花秋瑟瑟。" ---
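The card is front matter only; a minimal fill-mask sketch reusing the card's own first widget sentence (predictions depend on the checkpoint, so none are shown):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Cube/ShijiBERT")
for candidate in fill("[MASK]太元中,武陵人捕鱼为业。"):
    print(candidate["token_str"], candidate["score"])
```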
harish/t5-e2e-2epochs-lr1e4-alpha0-5
harish
2022-08-11T14:22:13Z
7
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-11T14:17:21Z
--- license: cc-by-nc-sa-4.0 ---
DOOGLAK/Tagged_One_100v2_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T14:19:11Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one100v2_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T14:13:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one100v2_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_100v2_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one100v2_wikigold_split type: tagged_one100v2_wikigold_split args: default metrics: - name: Precision type: precision value: 0.29022988505747127 - name: Recall type: recall value: 0.12856415478615071 - name: F1 type: f1 value: 0.17819336626676077 - name: Accuracy type: accuracy value: 0.833149450650485 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_100v2_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v2_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4407 - Precision: 0.2902 - Recall: 0.1286 - F1: 0.1782 - Accuracy: 0.8331 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 40 | 0.5318 | 0.2817 | 0.0204 | 0.0380 | 0.7978 | | No log | 2.0 | 80 | 0.4431 | 0.2932 | 0.1146 | 0.1647 | 0.8291 | | No log | 3.0 | 120 | 0.4407 | 0.2902 | 0.1286 | 0.1782 | 0.8331 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
ai4bharat/IndicBART-XLSum
ai4bharat
2022-08-11T14:17:37Z
143
3
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "multilingual", "nlp", "indicnlp", "bn", "gu", "hi", "mr", "pa", "ta", "te", "dataset:csebuetnlp/xlsum", "arxiv:2109.02903", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-11T09:24:38Z
--- language: - bn - gu - hi - mr - pa - ta - te datasets: - csebuetnlp/xlsum tags: - multilingual - nlp - indicnlp widget: - टेसा जॉवल का कहना है कि मृतकों और लापता लोगों के परिजनों की मदद के लिए एक केंद्र स्थापित किया जा रहा है. उन्होंने इस हादसे के तीन के बाद भी मृतकों की सूची जारी करने में हो रही देरी के बारे में स्पष्टीकरण देते हुए कहा है शवों की ठीक पहचान होना ज़रूरी है. पुलिस के अनुसार धमाकों में मारे गए लोगों की संख्या अब 49 हो गई है और अब भी 20 से ज़्यादा लोग लापता हैं. पुलिस के अनुसार लंदन पर हमले योजनाबद्ध तरीके से हुए और भूमिगत रेलगाड़ियों में विस्फोट तो 50 सैकेंड के भीतर हुए. पहचान की प्रक्रिया किंग्स क्रॉस स्टेशन के पास सुरंग में धमाके से क्षतिग्रस्त रेल कोचों में अब भी पड़े शवों के बारे में स्थिति साफ नहीं है और पुलिस ने आगाह किया है कि हताहतों की संख्या बढ़ सकती है. पुलिस, न्यायिक अधिकारियों, चिकित्सकों और अन्य विशेषज्ञों का एक आयोग बनाया गया है जिसकी देख-रेख में शवों की पहचान की प्रक्रिया पूरी होगी. महत्वपूर्ण है कि गुरुवार को लंदन में मृतकों के सम्मान में सार्वजनिक समारोह होगा जिसमें उन्हें श्रद्धाँजलि दी जाएगी और दो मिनट का मौन रखा जाएगा. पुलिस का कहना है कि वह इस्लामी चरमपंथी संगठन अबू हफ़्स अल-मासरी ब्रिगेड्स का इन धमाकों के बारे में किए गए दावे को गंभीरता से ले रही है. 'धमाके पचास सेकेंड में हुए' पुलिस के अनुसार लंदन पर हुए हमले योजनाबद्ध तरीके से किए गए थे. पुलिस के अनुसार भूमिगत रेलों में तीन बम अलग-अलग जगहों लगभग अचानक फटे थे. इसलिए पुलिस को संदेह है कि धमाकों में टाइमिंग उपकरणों का उपयोग किया गया होगा. यह भी तथ्य सामने आया है कि धमाकों में आधुनिक किस्म के विस्फोटकों का उपयोग किया गया था. पहले माना जा रहा था कि हमलों में देसी विस्फोटकों का इस्तेमाल किया गया होगा. पुलिस मुख्यालय स्कॉटलैंड यार्ड में सहायक उपायुक्त ब्रायन पैडिक ने बताया कि भूमिगत रेलों में तीन धमाके 50 सेकेंड के अंतराल के भीतर हुए थे. धमाके गुरुवार सुबह आठ बजकर पचास मिनट पर हुए थे. लंदन अंडरग्राउंड से मिली विस्तृत तकनीकी सूचनाओं से यह तथ्य सामने आया है. इससे पहले बम धमाकों में अच्छे खासे अंतराल की बात की जा रही थी.</s> <2hi> --- IndicBART-XLSum is a multilingual separate script [IndicBART](https://huggingface.co/ai4bharat/IndicBARTSS) based, sequence-to-sequence pre-trained model focusing on Indic languages. It currently supports 7 Indian languages and is based on the mBART architecture. Some salient features of the IndicBART-XLSum are: <ul> <li >Supported languages: Bengali, Gujarati, Hindi, Marathi, Punjabi, Tamil and Telugu. Not all of these languages are supported by mBART50 and mT5. </li> <li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for finetuning and decoding. </li> <li> Trained on Indic portion of <a href="https://huggingface.co/datasets/csebuetnlp/xlsum">XLSum corpora</a>. </li> <li> Each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li> </ul> You can read about IndicBARTSS in this <a href="https://arxiv.org/abs/2109.02903">paper</a>. 
# Usage:

```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBART-XLSum", do_lower_case=False, use_fast=False, keep_accents=True)

# Or use tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBART-XLSum", do_lower_case=False, use_fast=False, keep_accents=True)

model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART-XLSum")

# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBART-XLSum")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")

# To get lang_id use any of ['<2bn>', '<2gu>', '<2hi>', '<2mr>', '<2pa>', '<2ta>', '<2te>']

# First tokenize the input and outputs. The format below is how IndicBART-XLSum was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".

inp = tokenizer("टेसा जॉवल का कहना है कि मृतकों और लापता लोगों के परिजनों की मदद के लिए एक केंद्र स्थापित किया जा रहा है. उन्होंने इस हादसे के तीन के बाद भी मृतकों की सूची जारी करने में हो रही देरी के बारे में स्पष्टीकरण देते हुए कहा है शवों की ठीक पहचान होना ज़रूरी है. पुलिस के अनुसार धमाकों में मारे गए लोगों की संख्या अब 49 हो गई है और अब भी 20 से ज़्यादा लोग लापता हैं. पुलिस के अनुसार लंदन पर हमले योजनाबद्ध तरीके से हुए और भूमिगत रेलगाड़ियों में विस्फोट तो 50 सैकेंड के भीतर हुए. पहचान की प्रक्रिया किंग्स क्रॉस स्टेशन के पास सुरंग में धमाके से क्षतिग्रस्त रेल कोचों में अब भी पड़े शवों के बारे में स्थिति साफ नहीं है और पुलिस ने आगाह किया है कि हताहतों की संख्या बढ़ सकती है. पुलिस, न्यायिक अधिकारियों, चिकित्सकों और अन्य विशेषज्ञों का एक आयोग बनाया गया है जिसकी देख-रेख में शवों की पहचान की प्रक्रिया पूरी होगी. महत्वपूर्ण है कि गुरुवार को लंदन में मृतकों के सम्मान में सार्वजनिक समारोह होगा जिसमें उन्हें श्रद्धाँजलि दी जाएगी और दो मिनट का मौन रखा जाएगा. पुलिस का कहना है कि वह इस्लामी चरमपंथी संगठन अबू हफ़्स अल-मासरी ब्रिगेड्स का इन धमाकों के बारे में किए गए दावे को गंभीरता से ले रही है. 'धमाके पचास सेकेंड में हुए' पुलिस के अनुसार लंदन पर हुए हमले योजनाबद्ध तरीके से किए गए थे. पुलिस के अनुसार भूमिगत रेलों में तीन बम अलग-अलग जगहों लगभग अचानक फटे थे. इसलिए पुलिस को संदेह है कि धमाकों में टाइमिंग उपकरणों का उपयोग किया गया होगा. यह भी तथ्य सामने आया है कि धमाकों में आधुनिक किस्म के विस्फोटकों का उपयोग किया गया था. पहले माना जा रहा था कि हमलों में देसी विस्फोटकों का इस्तेमाल किया गया होगा. पुलिस मुख्यालय स्कॉटलैंड यार्ड में सहायक उपायुक्त ब्रायन पैडिक ने बताया कि भूमिगत रेलों में तीन धमाके 50 सेकेंड के अंतराल के भीतर हुए थे. धमाके गुरुवार सुबह आठ बजकर पचास मिनट पर हुए थे. लंदन अंडरग्राउंड से मिली विस्तृत तकनीकी सूचनाओं से यह तथ्य सामने आया है. इससे पहले बम धमाकों में अच्छे खासे अंतराल की बात की जा रही थी.</s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

out = tokenizer("<2hi>परिजनों की मदद की ज़िम्मेदारी मंत्री पर </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])

# For loss
model_outputs.loss ## This is not label smoothed.

# For logits
model_outputs.logits

# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero

model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>")) # "<2hi>" matches the "<2yy> Sentence </s>" output format described above for a Hindi summary

# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(decoded_output) # लंदन धमाकों में मारे गए लोगों की सूची जारी
```

# Benchmarks

Scores on the `IndicBART-XLSum` test sets are as follows:

| Language | Rouge-1 / Rouge-2 / Rouge-L |
|----------|-----------------------------|
| bn | 0.172331 / 0.051777 / 0.160245 |
| gu | 0.143240 / 0.039993 / 0.133981 |
| hi | 0.220394 / 0.065464 / 0.198816 |
| mr | 0.172568 / 0.062591 / 0.160403 |
| pa | 0.218274 / 0.066087 / 0.192010 |
| ta | 0.177317 / 0.058636 / 0.166324 |
| te | 0.156386 / 0.041042 / 0.144179 |
| average | 0.180073 / 0.055084 / 0.165137 |

# Notes:

1. This is compatible with the latest version of transformers but was developed with version 4.3.2, so consider using 4.3.2 if possible.
2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do, as in https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.
DOOGLAK/Tagged_One_100v1_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T14:13:25Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one100v1_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T14:08:11Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one100v1_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_100v1_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one100v1_wikigold_split type: tagged_one100v1_wikigold_split args: default metrics: - name: Precision type: precision value: 0.23249893932965635 - name: Recall type: recall value: 0.14241164241164242 - name: F1 type: f1 value: 0.17663174858984693 - name: Accuracy type: accuracy value: 0.8347454643603164 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_100v1_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v1_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4613 - Precision: 0.2325 - Recall: 0.1424 - F1: 0.1766 - Accuracy: 0.8347 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 39 | 0.5179 | 0.1311 | 0.0398 | 0.0610 | 0.8044 | | No log | 2.0 | 78 | 0.4609 | 0.2297 | 0.1351 | 0.1702 | 0.8327 | | No log | 3.0 | 117 | 0.4613 | 0.2325 | 0.1424 | 0.1766 | 0.8347 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
miguelwon/xlm-roberta-base-finetuned-panx-de
miguelwon
2022-08-11T14:08:55Z
106
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T12:47:00Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: train args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8615332274892267 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1375 - F1: 0.8615 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 525 | 0.1795 | 0.8092 | | No log | 2.0 | 1050 | 0.1360 | 0.8490 | | No log | 3.0 | 1575 | 0.1375 | 0.8615 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.13.0.dev20220808 - Datasets 2.4.0 - Tokenizers 0.12.1
DOOGLAK/Tagged_One_100v0_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T14:07:39Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one100v0_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T14:02:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one100v0_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_100v0_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one100v0_wikigold_split type: tagged_one100v0_wikigold_split args: default metrics: - name: Precision type: precision value: 0.16896060749881348 - name: Recall type: recall value: 0.08985360928823827 - name: F1 type: f1 value: 0.11731751524139067 - name: Accuracy type: accuracy value: 0.8183405097172117 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_100v0_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v0_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4700 - Precision: 0.1690 - Recall: 0.0899 - F1: 0.1173 - Accuracy: 0.8183 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 32 | 0.5975 | 0.1034 | 0.0015 | 0.0030 | 0.7790 | | No log | 2.0 | 64 | 0.4756 | 0.1607 | 0.0765 | 0.1036 | 0.8137 | | No log | 3.0 | 96 | 0.4700 | 0.1690 | 0.0899 | 0.1173 | 0.8183 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_One_50v8_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T13:57:11Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one50v8_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T13:52:19Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one50v8_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_50v8_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one50v8_wikigold_split type: tagged_one50v8_wikigold_split args: default metrics: - name: Precision type: precision value: 0.09166666666666666 - name: Recall type: recall value: 0.0053868756121449556 - name: F1 type: f1 value: 0.010175763182238666 - name: Accuracy type: accuracy value: 0.7848874958020822 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_50v8_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v8_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.5935 - Precision: 0.0917 - Recall: 0.0054 - F1: 0.0102 - Accuracy: 0.7849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 19 | 0.7198 | 0.0 | 0.0 | 0.0 | 0.7786 | | No log | 2.0 | 38 | 0.6263 | 0.0727 | 0.0010 | 0.0019 | 0.7798 | | No log | 3.0 | 57 | 0.5935 | 0.0917 | 0.0054 | 0.0102 | 0.7849 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_One_50v7_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T13:51:43Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one50v7_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T13:46:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one50v7_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_50v7_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one50v7_wikigold_split type: tagged_one50v7_wikigold_split args: default metrics: - name: Precision type: precision value: 0.0 - name: Recall type: recall value: 0.0 - name: F1 type: f1 value: 0.0 - name: Accuracy type: accuracy value: 0.7785234899328859 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_50v7_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v7_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6441 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.7785 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 13 | 0.7609 | 0.0 | 0.0 | 0.0 | 0.7783 | | No log | 2.0 | 26 | 0.6742 | 0.0 | 0.0 | 0.0 | 0.7783 | | No log | 3.0 | 39 | 0.6441 | 0.0 | 0.0 | 0.0 | 0.7785 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
DOOGLAK/Tagged_One_50v6_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T13:46:19Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one50v6_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T13:41:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one50v6_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_50v6_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one50v6_wikigold_split type: tagged_one50v6_wikigold_split args: default metrics: - name: Precision type: precision value: 0.0625 - name: Recall type: recall value: 0.0004854368932038835 - name: F1 type: f1 value: 0.0009633911368015415 - name: Accuracy type: accuracy value: 0.7775310137514861 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_50v6_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v6_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6728 - Precision: 0.0625 - Recall: 0.0005 - F1: 0.0010 - Accuracy: 0.7775 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 16 | 0.7728 | 0.0 | 0.0 | 0.0 | 0.7773 | | No log | 2.0 | 32 | 0.6898 | 0.04 | 0.0002 | 0.0005 | 0.7774 | | No log | 3.0 | 48 | 0.6728 | 0.0625 | 0.0005 | 0.0010 | 0.7775 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
mrm8488/Worm_v2
mrm8488
2022-08-11T13:35:34Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Worm", "region:us" ]
reinforcement-learning
2022-08-11T13:35:19Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Worm library_name: ml-agents --- # **ppo** Agent playing **Worm** This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm 2. Write your model_id: mrm8488/Worm_v2 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
DOOGLAK/Tagged_One_50v3_NER_Model_3Epochs_AUGMENTED
DOOGLAK
2022-08-11T13:30:37Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:tagged_one50v3_wikigold_split", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-08-11T13:26:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tagged_one50v3_wikigold_split metrics: - precision - recall - f1 - accuracy model-index: - name: Tagged_One_50v3_NER_Model_3Epochs_AUGMENTED results: - task: name: Token Classification type: token-classification dataset: name: tagged_one50v3_wikigold_split type: tagged_one50v3_wikigold_split args: default metrics: - name: Precision type: precision value: 0.13106796116504854 - name: Recall type: recall value: 0.006622516556291391 - name: F1 type: f1 value: 0.012607985057202896 - name: Accuracy type: accuracy value: 0.7834701450579107 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_50v3_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v3_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6197 - Precision: 0.1311 - Recall: 0.0066 - F1: 0.0126 - Accuracy: 0.7835 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 14 | 0.7544 | 0.0 | 0.0 | 0.0 | 0.7789 | | No log | 2.0 | 28 | 0.6444 | 0.0746 | 0.0025 | 0.0047 | 0.7818 | | No log | 3.0 | 42 | 0.6197 | 0.1311 | 0.0066 | 0.0126 | 0.7835 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6