Dataset schema (column: type, observed range):

- modelId: string, length 5 to 139
- author: string, length 2 to 42
- last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-15 12:29:39
- downloads: int64, 0 to 223M
- likes: int64, 0 to 11.7k
- library_name: string, 521 classes
- tags: list, length 1 to 4.05k
- pipeline_tag: string, 55 classes
- createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-15 12:28:52
- card: string, length 11 to 1.01M
Mark-Cooper/my_aime_gpt2_clm-model
Mark-Cooper
2023-02-17T15:09:47Z
4
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-02-17T12:24:15Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: Mark-Cooper/my_aime_gpt2_clm-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Mark-Cooper/my_aime_gpt2_clm-model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.0541 - Validation Loss: 2.8548 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.3688 | 3.7021 | 0 | | 3.5122 | 3.1472 | 1 | | 3.0541 | 2.8548 | 2 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
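The card above stops at the framework versions; a minimal inference sketch for this checkpoint, assuming it loads like any GPT-2 causal LM (the prompt and generation length are illustrative, and `framework="tf"` is chosen because the card lists TensorFlow weights):

```python
# Hedged sketch: text generation with this Keras-trained GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Mark-Cooper/my_aime_gpt2_clm-model",
    framework="tf",  # the card lists TensorFlow weights
)
print(generator("The answer to the problem is", max_new_tokens=40)[0]["generated_text"])
```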
jjanarek/sd-class-butterflies-32
jjanarek
2023-02-17T15:01:20Z
0
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-02-17T15:00:59Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('jjanarek/sd-class-butterflies-32') image = pipeline().images[0] image ```
jannikskytt/ppo-implemented-LunarLander-v2
jannikskytt
2023-02-17T14:59:36Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T10:41:59Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -105.68 +/- 49.60 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo2', 'seed': 1, 'torch_deterministic': True, 'cuda': True, 'track': False, 'wandb_project_name': 'cleanRL', 'wandb_entity': None, 'capture_video': False, 'env_id': 'LunarLander-v2', 'total_timesteps': 3500000, 'learning_rate': 0.00025, 'num_envs': 16, 'num_steps': 128, 'anneal_lr': True, 'gae': True, 'gamma': 0.99, 'gae_lambda': 0.95, 'num_minibatches': 4, 'update_epochs': 4, 'norm_adv': True, 'clip_coef': 0.2, 'clip_vloss': True, 'ent_coef': 0.01, 'vf_coef': 0.5, 'max_grad_norm': 0.5, 'target_kl': None, 'repo_id': 'jannikskytt/ppo-implemented-LunarLander-v2', 'batch_size': 2048, 'minibatch_size': 512} ```
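The derived batch sizes in the dictionary above follow the usual cleanRL convention; a short sketch checking the arithmetic against the listed values:

```python
# Derive the PPO batch sizes from the rollout settings above
# (cleanRL convention; the results match the card).
num_envs = 16
num_steps = 128
num_minibatches = 4

batch_size = num_envs * num_steps               # 16 * 128 = 2048
minibatch_size = batch_size // num_minibatches  # 2048 // 4 = 512
assert (batch_size, minibatch_size) == (2048, 512)
```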
MunSu/xlm-roberta-base-finetuned-panx-all
MunSu
2023-02-17T14:55:36Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-15T10:57:08Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [MunSu/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/MunSu/xlm-roberta-base-finetuned-panx-de) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.2643 - F1: 0.8601 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1634 | 1.0 | 2503 | 0.2532 | 0.8289 | | 0.1004 | 2.0 | 5006 | 0.2586 | 0.8541 | | 0.0576 | 3.0 | 7509 | 0.2643 | 0.8601 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.8.0 - Datasets 2.9.0 - Tokenizers 0.13.2
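The card above gives no usage snippet; a hedged inference sketch for this token-classification checkpoint (the aggregation strategy and example sentence are illustrative assumptions):

```python
# Hedged sketch: NER-style inference with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="MunSu/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Angela Merkel besuchte Paris im Mai."))
```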
Andrazp/multilingual-hate-speech-robacofi
Andrazp
2023-02-17T14:52:29Z
116
2
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "en", "arxiv:2104.12250", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-14T09:16:05Z
--- widget: - text: "My name is Mark and I live in London. I am a postgraduate student at Queen Mary University." language: - en license: mit --- # Multilingual Hate Speech Classifier for Social Media Content A multilingual model for hate speech classification of social media content. The model is based on pre-trained multilingual representations from the XLM-T model (https://arxiv.org/abs/2104.12250) and was jointly fine-tuned on five languages, namely Arabic, Croatian, English, German and Slovenian. The test results on these five languages in terms of F1 score are as follows: | Language | F1 | |-----------|:------:| | Arabic | 0.8704 | | Croatian | 0.7226 | | English | 0.7851 | | German | 0.7826 | | Slovenian | 0.7596 | ## Tokenizer During training the text was preprocessed using the original XLM-T tokenizer. The pretrained tokenizer files are included in this repository. We suggest using the same tokenizer for inference. ## Model output The model classifies each input into one of two distinct classes: * 0 - not-offensive * 1 - offensive ## Acknowledgments The authors acknowledge the financial support from the RobaCOFI project, which has indirectly received funding from the European Union’s Horizon 2020 research and innovation action programme via the AI4Media Open Call #1 issued and executed under the AI4Media project (Grant Agreement no. 951911), and from the Slovenian Research Agency for the project Hate speech in contemporary conceptualizations of nationalism, racism, gender and migration (J5-3102).
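A minimal classification sketch based on the card's label mapping (0 = not-offensive, 1 = offensive); the softmax handling is an illustrative choice, and the example sentence is taken from the card's widget:

```python
# Hedged sketch: binary offensive-language classification with this checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Andrazp/multilingual-hate-speech-robacofi"
tokenizer = AutoTokenizer.from_pretrained(repo)  # the bundled XLM-T tokenizer
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("My name is Mark and I live in London.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(["not-offensive", "offensive"][probs.argmax().item()], probs.tolist())
```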
selvino/Reinforce-PixelcopterPLE-v0
selvino
2023-02-17T14:48:36Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T14:48:13Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelcopterPLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 17.90 +/- 14.69 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Muffins987/amazon-xlnet-large-2
Muffins987
2023-02-17T14:45:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T12:13:28Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: amazon-xlnet-large-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazon-xlnet-large-2 This model is a fine-tuned version of [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5627 - Accuracy: 0.7763 - F1: 0.7702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.5505 | 1.0 | 20000 | 0.5627 | 0.7763 | 0.7702 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.1.0 - Tokenizers 0.12.1
Dunkindont/Foto-Assisted-Diffusion-FAD_V0
Dunkindont
2023-02-17T14:22:20Z
135
170
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "safetensors", "artwork", "HDR photography", "photos", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-02-10T23:22:33Z
--- license: creativeml-openrail-m language: - en tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - safetensors - diffusers - artwork - HDR photography - photos inference: true --- # Foto Assisted Diffusion (FAD)_V0 This model is meant to mimic a modern HDR photography style. It was trained on 600 HDR images on SD1.5 and works best at **768x768** resolutions. It was merged with one of my own models for illustrations and drawings, to increase flexibility. # Features: * **No additional licensing** * **Multi-resolution support** * **HDR photographic outputs** * **No Hi-Res fix required** * [**Spreadsheet with supported resolutions, keywords for prompting and other useful hints/tips**](https://docs.google.com/spreadsheets/d/1RGRLZhgiFtLMm5Pg8qK0YMc6wr6uvj9-XdiFM877Pp0/edit#gid=364842308) # Example Cards: Below you will find some example cards that this model is capable of outputting. You can acquire the images used here: [HF](https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/tree/main/Model%20Examples) or [Google Drive](https://docs.google.com/spreadsheets/d/1RGRLZhgiFtLMm5Pg8qK0YMc6wr6uvj9-XdiFM877Pp0/edit#gid=364842308). Google Drive gives you all of them at once without needing to clone the repo, which is easier. If you decide to clone it, set ``` GIT_LFS_SKIP_SMUDGE=1 ``` to skip downloading large files. Place them into an EXIF viewer such as the built-in "PNG Info" tab in the popular Auto1111 repository to quickly copy the parameters and replicate them! ## 768x768 Food <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/768x768%20Food.jpg" style="max-width: 800px;" width="100%"/> ## 768x768 Landscapes <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/768x768%20Landscapes.jpg" style="max-width: 800px;" width="100%"/> ## 768x768 People <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/768x768%20People.jpg" style="max-width: 800px;" width="100%"/> ## 768x768 Random <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/768x768%20Random.jpg" style="max-width: 800px;" width="100%"/> ## 512x512 Artwork <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/512x512%20Artwork.jpg" style="max-width: 800px;" width="100%"/> ## 512x512 Photos <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/512x512%20Photo.jpg" style="max-width: 800px;" width="100%"/> ## Cloud Support Sinkin kindly hosted our model. [Click here to run it on the cloud](https://sinkin.ai/m/V6vYoaL)! ## License *My motivation for making this model was to have a free, non-restricted model for the community to use and for startups.* *I was noticing that the models people gravitated towards were merged models which had prior license requirements from the people who trained them.* *This was just a fun project I put together for you guys.* *My fun ended when I posted the results :D* *Enjoy! Sharing is caring :)*
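The card above shows outputs but no loading code; a minimal text-to-image sketch with diffusers, assuming the repo loads as a standard SD 1.5 pipeline (the prompt, dtype, and device are illustrative assumptions):

```python
# Hedged sketch: generate at the card's recommended 768x768 resolution.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Dunkindont/Foto-Assisted-Diffusion-FAD_V0",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "HDR photo of a mountain lake at sunrise",
    width=768,
    height=768,  # the card recommends 768x768
).images[0]
image.save("fad_sample.png")
```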
Alex48/ppo-Huggy
Alex48
2023-02-17T14:18:04Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-02-17T14:17:57Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: Alex48/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
junnyu/demo_test_v2
junnyu
2023-02-17T14:11:41Z
0
0
null
[ "paddlepaddle", "stable-diffusion", "stable-diffusion-ppdiffusers", "text-to-image", "ppdiffusers", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-02-17T09:31:42Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-ppdiffusers - text-to-image - ppdiffusers - lora inference: false --- # LoRA DreamBooth - junnyu/demo_test_v2 These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
Belethor/mt5-small-finetuned-amazon-en-fr
Belethor
2023-02-17T14:07:32Z
3
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-17T09:36:54Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Belethor/mt5-small-finetuned-amazon-en-fr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Belethor/mt5-small-finetuned-amazon-en-fr This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 9.0466 - Validation Loss: 4.0067 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 20496, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 9.0466 | 4.0067 | 0 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
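A hedged TF inference sketch for the Keras-trained checkpoint above; the review text and generation settings are illustrative assumptions (English review summarization is inferred from the model name):

```python
# Hedged sketch: summarize a review with the TF mT5 checkpoint.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "Belethor/mt5-small-finetuned-amazon-en-fr"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer(
    "I loved this book! The characters are great and the plot kept me hooked.",
    return_tensors="tf",
)
summary_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```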
Glac1er/Glac1loraRest
Glac1er
2023-02-17T14:04:51Z
0
0
null
[ "anime", "region:us" ]
null
2023-02-17T14:03:31Z
--- tags: - anime --- NOTICE: My LoRAs require a high amount of tags to look good. I will fix this later on and update all of my LoRAs if everything works out. # General Information - [Overview](#overview) - [Installation](#installation) - [Usage](#usage) - [SocialMedia](#socialmedia) - [Plans for the future](#plans-for-the-future) # Overview Welcome to the place where I host my LoRAs. In short, a LoRA is just a checkpoint trained on a specific artstyle/subject that you load into your WebUI and can use with other models. Although you can use it with any model, the effects of a LoRA will vary between them. Most of the previews use models that come from [WarriorMama777](https://huggingface.co/WarriorMama777/OrangeMixs). For more information about them, you can visit the original LoRA repository: https://github.com/cloneofsimo/lora Every image posted here, or on other sites, has metadata in it that you can use in the PNG Info tab in your WebUI to get access to the prompt of the image. Everything I do here is free of charge! I don't guarantee that my LoRAs will give you good results; if you think they are bad, don't use them. # Installation To use them in your WebUI, please install the extension linked below, following the installation guide: https://github.com/kohya-ss/sd-webui-additional-networks#installation # Usage All of my LoRAs are to be used with their original danbooru tag. For example: ``` asuna \(blue archive\) ``` My LoRAs have suffixes that tell you how much they were trained, using words like "soft" and "hard", where soft stands for a lower amount of training and hard for a higher amount of training. A more trained LoRA is harder to modify but provides higher consistency in details and original outfits, while a less trained one is more flexible but may get details wrong. All the LoRAs that aren't marked with PRUNED require tagging everything about the character to get the likeness of it. You have to tag every part of the character, like: eyes, hair, breasts, accessories, special features, etc. In theory, this should allow the LoRAs to be more flexible, but it requires prompting those things every time, because the character tag doesn't have those features baked into it. From 1/16 I will test releasing pruned versions which will not require prompting those things. Their usage is also explained in this guide: https://github.com/kohya-ss/sd-webui-additional-networks#how-to-use # SocialMedia Here are some places where you can find my other stuff that I post, or if you feel like buying me a coffee: [Twitter](https://twitter.com/Trauter8) [Pixiv](https://www.pixiv.net/en/users/88153216) [Buymeacoffee](https://www.buymeacoffee.com/Trauter) # Plans for the future - Remake all of my LoRAs into pruned versions which will be more user-friendly and easier to use, and use 768x768 res. for training and a better learning rate - After finishing all of the LoRAs that I want to make, go over the old ones and try to make them better. - Accept suggestions for almost every character. - Maybe get motivation to actually tag outfits.
# LoRAs - [Genshin Impact](#genshin-impact) - [Eula](#eula) - [Barbara](#barbara) - [Diluc](#diluc) - [Mona](#mona) - [Rosaria](#rosaria) - [Yae Miko](#yae-miko) - [Raiden Shogun](#raiden-shogun) - [Kujou Sara](#kujou-sara) - [Shenhe](#shenhe) - [Yelan](#yelan) - [Jean](#jean) - [Lisa](#lisa) - [Zhongli](#zhongli) - [Yoimiya](#yoimiya) - [Blue Archive](#blue-archive) - [Rikuhachima Aru](#rikuhachima-aru) - [Ichinose Asuna](#ichinose-asuna) - [Fate Grand Order](#fate-grand-order) - [Minamoto-no-Raikou](#minamoto-no-raikou) - [Misc. Characters](#misc.-characters) - [Aponia](#aponia) - [Reisalin Stout](#reisalin-stout) - [Artstyles](#artstyles) - [Pozer](#pozer) # Genshin Impact - # Eula [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/1.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/1.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, eula \(genshin impact\), 1girl, solo, thighhighs, weapon, gloves, breasts, sword, hairband, necktie, holding, leotard, bangs, greatsword, cape, thighs, boots, blue hair, looking at viewer, arms up, vision (genshin impact), medium breasts, holding sword, long sleeves, holding weapon, purple eyes, medium hair, copyright name, hair ornament, thigh boots, black leotard, black hairband, blue necktie, black thighhighs, yellow eyes, closed mouth Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2010519914, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 1.8, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305293076) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Eula) - # Barbara [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/bar.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/bar.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, eula \(genshin impact\), 1girl, solo, thighhighs, weapon, gloves, breasts, sword, hairband, necktie, holding, leotard, bangs, greatsword, cape, thighs, boots, blue hair, looking at viewer, arms up, vision (genshin impact), medium breasts, holding sword, long sleeves, holding weapon, purple eyes, medium hair, copyright name, hair ornament, thigh boots, black leotard, black hairband, blue necktie, black thighhighs, yellow eyes, closed mouth Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2010519914, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 1.8, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305435137) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Barbara) - # Diluc [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/dil.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/dil.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, eula \(genshin impact\), 1girl, solo, 
thighhighs, weapon, gloves, breasts, sword, hairband, necktie, holding, leotard, bangs, greatsword, cape, thighs, boots, blue hair, looking at viewer, arms up, vision (genshin impact), medium breasts, holding sword, long sleeves, holding weapon, purple eyes, medium hair, copyright name, hair ornament, thigh boots, black leotard, black hairband, blue necktie, black thighhighs, yellow eyes, closed mouth Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2010519914, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 1.8, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305427945) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Diluc) - # Mona [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/mon.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/mon.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, eula \(genshin impact\), 1girl, solo, thighhighs, weapon, gloves, breasts, sword, hairband, necktie, holding, leotard, bangs, greatsword, cape, thighs, boots, blue hair, looking at viewer, arms up, vision (genshin impact), medium breasts, holding sword, long sleeves, holding weapon, purple eyes, medium hair, copyright name, hair ornament, thigh boots, black leotard, black hairband, blue necktie, black thighhighs, yellow eyes, closed mouth Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2010519914, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 1.8, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305428050) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Mona) - # Rosaria [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/ros.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/ros.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, eula \(genshin impact\), 1girl, solo, thighhighs, weapon, gloves, breasts, sword, hairband, necktie, holding, leotard, bangs, greatsword, cape, thighs, boots, blue hair, looking at viewer, arms up, vision (genshin impact), medium breasts, holding sword, long sleeves, holding weapon, purple eyes, medium hair, copyright name, hair ornament, thigh boots, black leotard, black hairband, blue necktie, black thighhighs, yellow eyes, closed mouth Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2010519914, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 1.8, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305428015) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Rosaria) - # Yae Miko [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/yae.png" 
width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/yae.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, eula \(genshin impact\), 1girl, solo, thighhighs, weapon, gloves, breasts, sword, hairband, necktie, holding, leotard, bangs, greatsword, cape, thighs, boots, blue hair, looking at viewer, arms up, vision (genshin impact), medium breasts, holding sword, long sleeves, holding weapon, purple eyes, medium hair, copyright name, hair ornament, thigh boots, black leotard, black hairband, blue necktie, black thighhighs, yellow eyes, closed mouth Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2010519914, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 1.8, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305448948) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/yae%20miko) - # Raiden Shogun - [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/ra.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/ra.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, raiden shogun, 1girl, breasts, solo, cleavage, kimono, bangs, sash, mole, obi, tassel, blush, large breasts, purple eyes, japanese clothes, long hair, looking at viewer, hand on own chest, hair ornament, purple hair, bridal gauntlets, closed mouth, purple kimono, blue hair, mole under eye, shoulder armor, long sleeves, wide sleeves, mitsudomoe (shape), tomoe (symbol), cowboy shot Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, from behind Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 4.5, Seed: 2544310848, Size: 704x384, Model hash: 2bba3136, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.05, Hires upscaler: 4x_foolhardy_Remacri </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305313633) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Raiden%20Shogun) - # Kujou Sara - [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/ku.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/ku.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, kujou sara, 1girl, solo, mask, gloves, bangs, bodysuit, gradient, sidelocks, signature, yellow eyes, bird mask, mask on head, looking at viewer, short hair, black hair, detached sleeves, simple background, japanese clothes, black gloves, black bodysuit, wide sleeves, white background, upper body, gradient background, closed mouth, hair ornament, artist name, elbow gloves Negative prompt: (worst quality, low quality:1.4) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 3966121353, Size: 512x768, Model hash: 931f9552, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 1.8, Hires steps: 20, Hires upscaler: Latent (nearest-exact) </pre> </details> - 
[Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305311498) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Kujou%20Sara) - # Shenhe - [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/sh.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/sh.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, shenhe \(genshin impact\), 1girl, solo, breasts, bodysuit, tassel, gloves, bangs, braid, outdoors, bird, jewelry, earrings, sky, breast curtain, long hair, hair over one eye, covered navel, blue eyes, looking at viewer, hair ornament, large breasts, shoulder cutout, clothing cutout, very long hair, hip vent, braided ponytail, partially fingerless gloves, black bodysuit, tassel earrings, black gloves, gold trim, cowboy shot, white hair Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 22, Sampler: DPM++ SDE Karras, CFG scale: 6.5, Seed: 573332187, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305307599) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Shenhe) - # Yelan - [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/10.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/10.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, yelan \(genshin impact\), 1girl, breasts, solo, bangs, armpits, smile, sky, cleavage, jewelry, gloves, jacket, dice, mole, cloud, grin, dress, blush, earrings, thighs, tassel, sleeveless, day, outdoors, large breasts, looking at viewer, green eyes, arms up, short hair, blue hair, vision (genshin impact), fur trim, white jacket, blue sky, mole on breast, arms behind head, bob cut, multicolored hair, black hair, fur-trimmed jacket, elbow gloves, bare shoulders, blue dress, parted lips, diagonal bangs, clothing cutout, pelvic curtain, asymmetrical gloves Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name Steps: 23, Sampler: DPM++ SDE Karras, CFG scale: 6.5, Seed: 575500509, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.58, Clip skip: 2, ENSD: 31337, Hires upscale: 2.4, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305296897) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Yelan) - # Jean - [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/333.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/333.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, jean \(genshin impact\), 1girl, breasts, solo, cleavage, strapless, smile, ponytail, bangs, jewelry, earrings, bow, capelet, signature, sidelocks, cape, corset, shiny, blonde hair, long hair, upper body, detached sleeves, purple eyes, hair between eyes, hair bow, 
parted lips, looking to the side, large breasts, detached collar, medium breasts, blue capelet, white background, black bow, blue eyes, bare shoulders, simple background Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 22, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 32930253, Size: 512x768, Model hash: ffa7b160, Denoising strength: 0.59, Clip skip: 2, ENSD: 31337, Hires upscale: 1.85, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305307594) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Jean) - # Lisa [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/lis.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/lis.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, lisa \(genshin impact\), 1girl, solo, hat, breasts, gloves, cleavage, flower, smile, bangs, dress, rose, jewelry, witch, capelet, green eyes, witch hat, brown hair, purple headwear, looking at viewer, white background, large breasts, long hair, simple background, black gloves, purple flower, hair between eyes, upper body, purple rose, parted lips, purple capelet, hat flower, multicolored dress, hair ornament, multicolored clothes, vision (genshin impact) Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, worst quality, low quality, extra digits, loli, loli face Steps: 23, Sampler: DPM++ SDE Karras, CFG scale: 6.5, Seed: 350134479, Size: 512x768, Model hash: ffa7b160, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 1.85, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305290865) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Lisa) - # Zhongli [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/zho.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/zho.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, zhongli \(genshin impact\), solo, 1boy, bangs, jewelry, tassel, earrings, ponytail, low ponytail, gloves, necktie, jacket, shirt, formal, petals, suit, makeup, eyeliner, eyeshadow, male focus, long hair, brown hair, multicolored hair, long sleeves, tassel earrings, single earring, collared shirt, hair between eyes, black gloves, closed mouth, yellow eyes, gradient hair, orange hair, simple background Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, worst quality, low quality, extra digits, loli, loli face Steps: 22, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 88418604, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.58, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires 
upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305311423) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Zhongli) - # Yoimiya [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/Yoi.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/Yoi.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, eula \(genshin impact\), 1girl, solo, thighhighs, weapon, gloves, breasts, sword, hairband, necktie, holding, leotard, bangs, greatsword, cape, thighs, boots, blue hair, looking at viewer, arms up, vision (genshin impact), medium breasts, holding sword, long sleeves, holding weapon, purple eyes, medium hair, copyright name, hair ornament, thigh boots, black leotard, black hairband, blue necktie, black thighhighs, yellow eyes, closed mouth Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2010519914, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 1.8, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305448498) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Genshin-Impact/Yoimiya) # Blue Archive - # Rikuhachima Aru - [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/22.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/22.png) <details> <summary>Sample Prompt</summary> <pre> aru \(blue archive\), masterpiece, best quality, 1girl, solo, horns, skirt, gloves, shirt, halo, window, breasts, blush, sweatdrop, ribbon, coat, bangs, :d, smile, indoors, standing, plant, thighs, sweat, jacket, day, sunlight, long hair, white shirt, white gloves, black skirt, looking at viewer, open mouth, long sleeves, red ribbon, fur trim, neck ribbon, red hair, fur-trimmed coat, collared shirt, orange eyes, medium breasts, brown coat, hands up, side slit, coat on shoulders, v-shaped eyebrows, yellow eyes, potted plant, fur collar, shirt tucked in, demon horns, high-waist skirt, dress shirt Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 22, Sampler: DPM++ SDE Karras, CFG scale: 6.5, Seed: 1190296645, Size: 512x768, Model hash: ffa7b160, Denoising strength: 0.58, Clip skip: 2, ENSD: 31337, Hires upscale: 1.85, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305293051) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Blue-Archive/Rikuhachima%20Aru) - # Ichinose Asuna - [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/asu.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/asu.png) <details> <summary>Sample Prompt</summary> <pre> photorealistic, (hyperrealistic:1.2), (extremely detailed CG unity 8k wallpaper), (ultra-detailed), (mature 
female:1.2), masterpiece, best quality, asuna \(blue archive\), 1girl, breasts, solo, gloves, pantyhose, ass, leotard, smile, tail, halo, grin, blush, bangs, sideboob, highleg, standing, mole, strapless, ribbon, thighs, animal ears, playboy bunny, rabbit ears, long hair, white gloves, very long hair, large breasts, high heels, blue leotard, hair over one eye, fake animal ears, blue eyes, looking at viewer, white footwear, rabbit tail, official alternate costume, full body, elbow gloves, simple background, white background, absurdly long hair, bare shoulders, detached collar, thighband pantyhose, leaning forward, highleg leotard, strapless leotard, hair ribbon, brown pantyhose, black pantyhose, mole on breast, light brown hair, brown hair, looking back, fake tail Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 22, Sampler: DPM++ SDE Karras, CFG scale: 6.5, Seed: 2052579935, Size: 512x768, Model hash: ffa7b160, Clip skip: 2, ENSD: 31337 </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305292996) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Blue-Archive/Ichinose%20Asuna) # Fate Grand Order - # Minamoto-no-Raikou - [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/3.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/3.png) <details> <summary>Sample Prompt</summary> <pre> mature female, masterpiece, best quality, minamoto no raikou \(fate\), 1girl, breasts, solo, bodysuit, gloves, bangs, smile, rope, heart, blush, thighs, armor, kote, long hair, purple hair, fingerless gloves, purple eyes, large breasts, very long hair, looking at viewer, parted bangs, ribbed sleeves, black gloves, arm guards, covered navel, low-tied long hair, purple bodysuit, japanese armor Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 22, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 3383453781, Size: 512x768, Model hash: ffa7b160, Denoising strength: 0.59, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305290900) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Fate-Grand-Order/Minamoto-no-Raikou) # Misc. 
Characters - # Aponia [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/apo.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/apo.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, eula \(genshin impact\), 1girl, solo, thighhighs, weapon, gloves, breasts, sword, hairband, necktie, holding, leotard, bangs, greatsword, cape, thighs, boots, blue hair, looking at viewer, arms up, vision (genshin impact), medium breasts, holding sword, long sleeves, holding weapon, purple eyes, medium hair, copyright name, hair ornament, thigh boots, black leotard, black hairband, blue necktie, black thighhighs, yellow eyes, closed mouth Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2010519914, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 1.8, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305445819) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Misc.%20Characters/Aponia) - # Reisalin Stout [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/ryza.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/ryza.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, eula \(genshin impact\), 1girl, solo, thighhighs, weapon, gloves, breasts, sword, hairband, necktie, holding, leotard, bangs, greatsword, cape, thighs, boots, blue hair, looking at viewer, arms up, vision (genshin impact), medium breasts, holding sword, long sleeves, holding weapon, purple eyes, medium hair, copyright name, hair ornament, thigh boots, black leotard, black hairband, blue necktie, black thighhighs, yellow eyes, closed mouth Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2010519914, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, Hires upscale: 1.8, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305448553) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Misc.%20Characters/reisalin%20stout) # Artstyles - # Pozer [<img src="https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/art.png" width="512" height="768">](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/resolve/main/LoRA/Previews/art.png) <details> <summary>Sample Prompt</summary> <pre> masterpiece, best quality, eula \(genshin impact\), 1girl, solo, thighhighs, weapon, gloves, breasts, sword, hairband, necktie, holding, leotard, bangs, greatsword, cape, thighs, boots, blue hair, looking at viewer, arms up, vision (genshin impact), medium breasts, holding sword, long sleeves, holding weapon, purple eyes, medium hair, copyright name, hair ornament, thigh boots, black leotard, black hairband, blue necktie, black thighhighs, yellow eyes, closed mouth Negative prompt: (worst quality, low quality, extra digits, loli, loli face:1.3) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2010519914, Size: 512x768, Model hash: a87fd7da, Denoising strength: 0.57, Clip skip: 
2, ENSD: 31337, Hires upscale: 1.8, Hires upscaler: Latent (nearest-exact) </pre> </details> - [Examples](https://www.flickr.com/photos/197461145@N04/albums/72177720305445399) - [Download](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/tree/main/LoRA/Artstyles/Pozer)
lizziedearden/my_aime_gpt2_clm-model
lizziedearden
2023-02-17T13:58:08Z
4
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-02-17T12:09:15Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: lizziedearden/my_aime_gpt2_clm-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # lizziedearden/my_aime_gpt2_clm-model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.9895 - Validation Loss: 2.9968 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.3162 | 3.7395 | 0 | | 3.4696 | 3.2608 | 1 | | 2.9895 | 2.9968 | 2 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
Tritkoman/EnglishtoAncientGreekV5
Tritkoman
2023-02-17T13:56:46Z
3
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain", "translation", "unk", "dataset:Tritkoman/autotrain-data-apapaqjajq", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-02-17T13:35:19Z
--- tags: - autotrain - translation language: - unk - unk datasets: - Tritkoman/autotrain-data-apapaqjajq co2_eq_emissions: emissions: 0.10700184364056661 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 3548795734 - CO2 Emissions (in grams): 0.1070 ## Validation Metrics - Loss: 1.703 - SacreBLEU: 7.516 - Gen len: 25.710
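The AutoTrain card above omits a usage snippet; a hedged sketch assuming standard seq2seq loading (English input is an assumption based on the model name EnglishtoAncientGreekV5):

```python
# Hedged sketch: translation with the AutoTrain mT5 checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "Tritkoman/EnglishtoAncientGreekV5"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

batch = tokenizer("The soldiers marched to the sea.", return_tensors="pt")
out = model.generate(**batch, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```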
Ithai/Reinforce-Pixelcopter-PLE-v0
Ithai
2023-02-17T13:49:21Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-16T15:53:28Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 39.05 +/- 37.14 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
YoaneBailiang/STL
YoaneBailiang
2023-02-17T13:29:08Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-17T13:29:08Z
--- license: creativeml-openrail-m ---
katkha/whisper-small-ka
katkha
2023-02-17T12:22:05Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "ka", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-13T11:25:36Z
--- language: - ka license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Small ka - Davit Barbakadze results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small ka - Davit Barbakadze This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.1652 - eval_wer: 47.0800 - eval_runtime: 1493.5786 - eval_samples_per_second: 1.673 - eval_steps_per_second: 0.21 - epoch: 13.01 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
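A hedged transcription sketch for the Whisper checkpoint above; the audio file path is a placeholder and the chunking setting is an illustrative choice for long inputs:

```python
# Hedged sketch: Georgian speech-to-text with the fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="katkha/whisper-small-ka",
    chunk_length_s=30,  # Whisper's native context window
)
print(asr("georgian_sample.wav")["text"])
```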
MunSu/xlm-roberta-base-finetuned-panx-de
MunSu
2023-02-17T12:18:59Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-14T23:58:31Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.fr split: validation args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8525033829499323 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [MunSu/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/MunSu/xlm-roberta-base-finetuned-panx-de) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4005 - F1: 0.8525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 500 | 0.3080 | 0.8254 | | No log | 2.0 | 1000 | 0.3795 | 0.8448 | | No log | 3.0 | 1500 | 0.4005 | 0.8525 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.8.0 - Datasets 2.9.0 - Tokenizers 0.13.2
ZhihongDeng/Reinforce-CartPole-v1
ZhihongDeng
2023-02-17T12:07:05Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T12:06:55Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
jiaoqsh/mbart-large-50-finetuned-stocks-event-1
jiaoqsh
2023-02-17T12:02:11Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "summarization", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-02-17T11:42:33Z
--- license: mit tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mbart-large-50-finetuned-stocks-event-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-50-finetuned-stocks-event-1 This model is a fine-tuned version of [jiaoqsh/mbart-large-50-finetuned-stock-dividend](https://huggingface.co/jiaoqsh/mbart-large-50-finetuned-stock-dividend) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1419 - Rouge1: 0.9120 - Rouge2: 0.8056 - Rougel: 0.9120 - Rougelsum: 0.9120 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 0.3179 | 1.0 | 20 | 0.1784 | 0.8727 | 0.7639 | 0.8704 | 0.8727 | | 0.0569 | 2.0 | 40 | 0.0822 | 0.9167 | 0.8333 | 0.9167 | 0.9144 | | 0.0284 | 3.0 | 60 | 0.1842 | 0.9120 | 0.8194 | 0.9144 | 0.9120 | | 0.0153 | 4.0 | 80 | 0.1448 | 0.9236 | 0.8472 | 0.9213 | 0.9236 | | 0.0066 | 5.0 | 100 | 0.1271 | 0.9444 | 0.875 | 0.9421 | 0.9444 | | 0.0013 | 6.0 | 120 | 0.1381 | 0.9190 | 0.8194 | 0.9213 | 0.9213 | | 0.0083 | 7.0 | 140 | 0.1414 | 0.9190 | 0.8194 | 0.9213 | 0.9213 | | 0.0002 | 8.0 | 160 | 0.1419 | 0.9120 | 0.8056 | 0.9120 | 0.9120 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
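## Usage example (sketch)

A minimal inference sketch, assuming the standard `transformers` summarization pipeline works with this mBART checkpoint; the input text is invented, since the card does not describe the training data beyond the stocks/dividend naming:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jiaoqsh/mbart-large-50-finetuned-stocks-event-1")

# Hypothetical input in the spirit of the model's name.
text = (
    "The board of directors has approved a cash dividend of 0.5 per share, "
    "payable to shareholders of record at the end of the next quarter."
)
print(summarizer(text, max_length=32)[0]["summary_text"])
```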
alexrink/t5-small-finetuned-xsum
alexrink
2023-02-17T11:44:18Z
7
0
transformers
[ "transformers", "tf", "tensorboard", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: alexrink/t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # alexrink/t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.6399 - Validation Loss: 6.0028 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.2, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 11.4991 | 6.9902 | 0 | | 6.5958 | 6.2502 | 1 | | 6.1443 | 6.1638 | 2 | | 5.9379 | 6.0765 | 3 | | 5.7739 | 5.9393 | 4 | | 5.7033 | 6.0061 | 5 | | 5.7070 | 5.9305 | 6 | | 5.7000 | 5.9698 | 7 | | 5.6888 | 5.9223 | 8 | | 5.6657 | 5.9773 | 9 | | 5.6827 | 5.9734 | 10 | | 5.6380 | 5.9428 | 11 | | 5.6532 | 5.9799 | 12 | | 5.6617 | 5.9974 | 13 | | 5.6402 | 5.9563 | 14 | | 5.6710 | 5.9926 | 15 | | 5.6999 | 5.9764 | 16 | | 5.6573 | 5.9557 | 17 | | 5.6297 | 5.9678 | 18 | | 5.6399 | 6.0028 | 19 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
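## Usage example (sketch)

A minimal inference sketch; the repository holds TensorFlow weights (per the `tf` tag), hence `framework="tf"`. Note that the high training loss reported above suggests outputs may be of limited quality; the input article is invented:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="alexrink/t5-small-finetuned-xsum", framework="tf")

article = "The local council has approved plans for a new cycle path along the river, due to open next spring."
print(summarizer(article, max_length=30)[0]["summary_text"])
```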
Tincando/my_awesome_neo_gpt-model
Tincando
2023-02-17T11:33:12Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-02-16T19:00:41Z
--- license: mit tags: - generated_from_trainer model-index: - name: my_awesome_neo_gpt-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_neo_gpt-model This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4912 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3957 | 1.0 | 1109 | 3.4908 | | 3.2523 | 2.0 | 2218 | 3.4871 | | 3.1771 | 3.0 | 3327 | 3.4912 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
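## Usage example (sketch)

A minimal generation sketch with the standard `transformers` pipeline; the prompt and sampling settings are illustrative only:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Tincando/my_awesome_neo_gpt-model")

print(generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.9)[0]["generated_text"])
```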
danilyef/dqn-SpaceInvadersNoFrameskip-v4
danilyef
2023-02-17T11:32:00Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T11:31:13Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 717.50 +/- 348.83 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga danilyef -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga danilyef -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga danilyef ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Al3ksandra/distilbert-base-uncased-finetuned-emotion
Al3ksandra
2023-02-17T11:10:51Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-14T12:21:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - eval_loss: 0.3158 - eval_accuracy: 0.902 - eval_f1: 0.8997 - eval_runtime: 102.1735 - eval_samples_per_second: 19.575 - eval_steps_per_second: 0.313 - epoch: 1.0 - step: 250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cpu - Datasets 2.9.0 - Tokenizers 0.13.2
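## Usage example (sketch)

A minimal inference sketch with the standard `transformers` pipeline; the example sentence is invented:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Al3ksandra/distilbert-base-uncased-finetuned-emotion")

# The emotion dataset uses six labels: sadness, joy, love, anger, fear, surprise.
print(classifier("I can't wait to see you this weekend!"))
```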
akoshel/Reinforce-PixelCopter
akoshel
2023-02-17T11:06:34Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T10:38:59Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 25.30 +/- 17.07 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
akadhim-ai/dilbert-comic-model-v1-2_2k
akadhim-ai
2023-02-17T10:58:58Z
3
0
diffusers
[ "diffusers", "text-to-image", "en", "dataset:Ali-fb/dilbert-comic-sample-dataset", "license:openrail", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-15T01:53:27Z
---
license: openrail
datasets:
- Ali-fb/dilbert-comic-sample-dataset
language:
- en
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-to-image
---

# DreamBooth model for the Dilbert concept, trained on the Ali-fb/dilbert-comic-sample-dataset dataset

This is a Stable Diffusion model fine-tuned on the Dilbert concept. It can be used by including the `instance_prompt` in your prompt: **dilbert**

## Description

A Dilbert-style Stable Diffusion model trained with DreamBooth.

## Usage

```python
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained('Ali-fb/dilbert-comic-model-v1-2_2k')
# A text prompt is required; it should contain the instance token "dilbert".
image = pipeline("dilbert, an engineer at his desk, comic style").images[0]
image
```
ybelkada/gpt-neo-125m-detoxified-long-context
ybelkada
2023-02-17T10:58:33Z
14
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-02-13T19:45:08Z
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Training logs

Training logs can be found [here](https://wandb.ai/distill-bloom/trl/runs/08o87vjz?workspace=user-younesbelkada)

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

# The autogenerated card pointed at a local temp path; this is the Hub repo id.
generator = pipeline("text-generation", model="ybelkada/gpt-neo-125m-detoxified-long-context")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("ybelkada/gpt-neo-125m-detoxified-long-context")
model = AutoModelForCausalLMWithValueHead.from_pretrained("ybelkada/gpt-neo-125m-detoxified-long-context")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
NoNameFound/ppo-LunarLander-v1
NoNameFound
2023-02-17T10:58:30Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T05:11:36Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -9.39 +/- 78.05 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 500000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'NoNameFound/ppo-LunarLander-v1' 'batch_size': 512 'minibatch_size': 128} ```
odahl/a2c-PandaReachDense-v2
odahl
2023-02-17T10:53:55Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-15T19:45:33Z
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v2
      type: PandaReachDense-v2
    metrics:
    - type: mean_reward
      value: -1.13 +/- 0.07
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v2**

This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename below is an assumption based on the usual `<algo>-<env>.zip` naming convention, so check the repository file list if loading fails:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on the usual <algo>-<env>.zip convention.
checkpoint = load_from_hub(repo_id="odahl/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Antiraedus/test-text2Live
Antiraedus
2023-02-17T10:53:45Z
0
1
null
[ "arxiv:2204.02491", "region:us" ]
null
2023-01-27T22:44:40Z
# Text2LIVE: Text-Driven Layered Image and Video Editing (ECCV 2022 - Oral)
## [<a href="https://text2live.github.io/" target="_blank">Project Page</a>]

[![arXiv](https://img.shields.io/badge/arXiv-Text2LIVE-b31b1b.svg)](https://arxiv.org/abs/2204.02491) ![Pytorch](https://img.shields.io/badge/PyTorch->=1.10.0-Red?logo=pytorch) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/weizmannscience/text2live)

![teaser](https://user-images.githubusercontent.com/22198039/179798581-ca6f6652-600a-400a-b21b-713fc5c15d56.png)

**Text2LIVE** is a method for text-driven editing of real-world images and videos, as described in the <a href="https://arxiv.org/abs/2204.02491" target="_blank">paper</a>.

>We present a method for zero-shot, text-driven appearance manipulation in natural images and videos. Specifically, given an input image or video and a target text prompt, our goal is to edit the appearance of existing objects (e.g., object's texture) or augment the scene with new visual effects (e.g., smoke, fire) in a semantically meaningful manner. Our framework trains a generator using an internal dataset of training examples, extracted from a single input (image or video and target text prompt), while leveraging an external pre-trained CLIP model to establish our losses. Rather than directly generating the edited output, our key idea is to generate an edit layer (color+opacity) that is composited over the original input. This allows us to constrain the generation process and maintain high fidelity to the original input via novel text-driven losses that are applied directly to the edit layer. Our method neither relies on a pre-trained generator nor requires user-provided edit masks. Thus, it can perform localized, semantic edits on high-resolution natural images and videos across a variety of objects and scenes.

## Getting Started
### Installation

```
git clone https://github.com/omerbt/Text2LIVE.git
conda create --name text2live python=3.9
conda activate text2live
pip install -r requirements.txt
```

### Download sample images and videos

Download sample images and videos from the DAVIS dataset:

```
cd Text2LIVE
gdown "https://drive.google.com/uc?id=1osN4PlPkY9uk6pFqJZo8lhJUjTIpa80J&export=download"
unzip data.zip
```

It will create a folder `data`:

```
Text2LIVE
├── ...
├── data
│   ├── pretrained_nla_models       # NLA models are stored here
│   ├── images                      # sample images
│   └── videos                      # sample videos from DAVIS dataset
│       ├── car-turn                # contains video frames
│       ├── ...
└── ...
```

To enforce temporal consistency in video edits, we utilize the Neural Layered Atlases (NLA). Pretrained NLA models are taken from <a href="https://layered-neural-atlases.github.io">here</a>, and are already inside the `data` folder.

### Run examples

* Our method is designed to change textures of existing objects / augment the scene with semi-transparent effects (e.g., smoke, fire). It is not designed for adding new objects or significantly deviating from the original spatial layout.
* Training **Text2LIVE** multiple times with the same inputs can lead to slightly different results.
* CLIP sometimes exhibits bias towards specific solutions (see figure 9 in the paper), thus slightly different text prompts may lead to different flavors of edits.
* The required GPU memory depends on the input image/video size, but you should be good with a Tesla V100 32GB :). Mixed precision currently introduces some instability in the training process, but it could be added later.

#### Video Editing

Run the following command to start training:

```
python train_video.py --example_config car-turn_winter.yaml
```

#### Image Editing

Run the following command to start training:

```
python train_image.py --example_config golden_horse.yaml
```

Intermediate results will be saved to `results` during optimization. The frequency of saving intermediate results is indicated in the `log_images_freq` flag of the configuration.

## Sample Results

https://user-images.githubusercontent.com/22198039/179797381-983e0453-2e5d-40e8-983d-578217b358e4.mov

For more see the [supplementary material](https://text2live.github.io/sm/index.html).

## Citation

```
@inproceedings{bar2022text2live,
  title={Text2live: Text-driven layered image and video editing},
  author={Bar-Tal, Omer and Ofri-Amar, Dolev and Fridman, Rafail and Kasten, Yoni and Dekel, Tali},
  booktitle={European Conference on Computer Vision},
  pages={707--723},
  year={2022},
  organization={Springer}
}
```
Ahmade/bert_fine_tuned_cola
Ahmade
2023-02-17T10:27:04Z
5
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T09:06:37Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Ahmade/bert_fine_tuned_cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Ahmade/bert_fine_tuned_cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5945 - Validation Loss: 0.5177 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.5945 | 0.5177 | 0 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
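## Usage example (sketch)

A minimal inference sketch; the repository holds TensorFlow weights (per the `tf` tag), hence `framework="tf"`. The label meanings are an assumption: the model name suggests CoLA-style linguistic acceptability, but the card does not confirm it:

```python
from transformers import pipeline

# Labels are assumed to follow CoLA (acceptable vs. unacceptable); not stated in the card.
classifier = pipeline("text-classification", model="Ahmade/bert_fine_tuned_cola", framework="tf")

print(classifier("The book was read by the student."))
```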
google/efficientnet-b7
google
2023-02-17T10:08:23Z
3,005
11
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T23:35:01Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b7 model)

EfficientNet model trained on ImageNet-1k at resolution 600x600. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b7")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b7")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts one of the 1,000 ImageNet classes.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
google/efficientnet-b6
google
2023-02-17T10:08:06Z
159
0
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T23:28:54Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b6 model)

EfficientNet model trained on ImageNet-1k at resolution 528x528. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b6")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b6")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts one of the 1,000 ImageNet classes.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
google/efficientnet-b4
google
2023-02-17T10:06:45Z
1,722
1
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T23:21:54Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b4 model)

EfficientNet model trained on ImageNet-1k at resolution 380x380. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b4")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b4")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts one of the 1,000 ImageNet classes.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
google/efficientnet-b3
google
2023-02-17T10:06:26Z
200
0
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T23:18:33Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b3 model)

EfficientNet model trained on ImageNet-1k at resolution 300x300. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b3")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b3")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts one of the 1,000 ImageNet classes.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
google/efficientnet-b2
google
2023-02-17T10:06:07Z
255,301
0
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T22:32:36Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b2 model)

EfficientNet model trained on ImageNet-1k at resolution 260x260. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b2")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b2")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts one of the 1,000 ImageNet classes.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
google/efficientnet-b1
google
2023-02-17T10:05:45Z
3,499
1
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T22:30:43Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b1 model)

EfficientNet model trained on ImageNet-1k at resolution 240x240. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b1")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b1")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts one of the 1,000 ImageNet classes.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
google/efficientnet-b0
google
2023-02-17T10:05:19Z
16,072
8
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T20:17:27Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b0 model)

EfficientNet model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b0")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b0")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts one of the 1,000 ImageNet classes.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
vijayprakash/arcade-game
vijayprakash
2023-02-17T10:01:05Z
18
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-16T19:13:30Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: arcade-game results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9756097793579102 --- # arcade-game Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### playing card game ![ playing card game](images/_playing_card_game.jpg) #### 8 Ball Pool game ![8 Ball Pool game](images/8_Ball_Pool_game.jpg) #### Asphalt game ![Asphalt game](images/Asphalt_game.jpg) #### Bubble Shooter game ![Bubble Shooter game](images/Bubble_Shooter_game.jpg) #### Call of Duty game ![Call of Duty game](images/Call_of_Duty_game.jpg) #### Candy Crush Saga ![Candy Crush Saga](images/Candy_Crush_Saga.jpg) #### Carrom Pool: Disc Game ![Carrom Pool: Disc Game](images/Carrom_Pool:_Disc_Game.jpg) #### Clash of Clans game ![Clash of Clans game](images/Clash_of_Clans_game.jpg) #### Coin Master game ![Coin Master game](images/Coin_Master_game.jpg) #### Cricket League game ![Cricket League game](images/Cricket_League_game.jpg)
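## Usage example (sketch)

A minimal inference sketch with the standard `transformers` image-classification pipeline; the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="vijayprakash/arcade-game")

# Placeholder path: any local screenshot or image URL works.
print(classifier("game_screenshot.jpg", top_k=3))
```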
iubeda/Reinforce-CartPole-v1
iubeda
2023-02-17T09:57:19Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T09:57:09Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
egee/q-FrozenLake-v1-4x4-noSlippery
egee
2023-02-17T09:43:46Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T09:43:43Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook (Unit 2).
model = load_from_hub(repo_id="egee/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
ashutoshmondal/autotrain-wilderv2-3544295625
ashutoshmondal
2023-02-17T09:17:10Z
18
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "autotrain", "vision", "dataset:ashutoshmondal/autotrain-data-wilderv2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-17T09:15:27Z
--- tags: - autotrain - vision - image-classification datasets: - ashutoshmondal/autotrain-data-wilderv2 widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 2.829794634796424 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 3544295625 - CO2 Emissions (in grams): 2.8298 ## Validation Metrics - Loss: 0.159 - Accuracy: 0.940 - Precision: 0.923 - Recall: 0.960 - AUC: 0.988 - F1: 0.941
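## Usage example (sketch)

A minimal inference sketch, assuming this AutoTrain checkpoint loads with the standard `transformers` image-classification pipeline; the image path is a placeholder, and the two class names come from the training dataset, which is not described in the card:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ashutoshmondal/autotrain-wilderv2-3544295625")

print(classifier("example.jpg"))  # placeholder image path
```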
Falah/babylon
Falah
2023-02-17T09:13:14Z
2
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-17T08:22:11Z
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---

### Babylonian clothes (Arabic: الازياء البابلية) DreamBooth model, trained by Falah.G.Salieh

## You can visit my blog: https://iraqprogrammer.wordpress.com/
## FB: https://web.facebook.com/falahgs
## Email: [email protected]

With Stable Diffusion, we can now create AI-generated art from a set of training images. This model generates images of women wearing the Babylonian fashion style, which originates from the Babylonian civilization in Iraq and was popular clothing for Babylonian women in ancient times, or anything else you can think of.

Test the concept via the A1111 Colab: fast-Colab-A1111

Sample images of this concept with simple and easy prompts. Use any prompt and add the keyword "babylon style":

Arabic beautiful woman in a costume with a long braid and a fur collar and a chain around her neck and a green, flowers, garden in the background, Bálint Kiss, promotional image, a colorized photo, antipodeans, babylon style, full shot

![0](https://huggingface.co/Falah/babylon/resolve/main/sample_images/00006-3140811858.png)
![1](https://huggingface.co/Falah/babylon/resolve/main/00005-3140811859.png)
![2](https://huggingface.co/Falah/babylon/resolve/main/00008-3140811860.png)
![3](https://huggingface.co/Falah/babylon/resolve/main/00021-1653315922.png)
WimStraetemans/ppo-LunarLander-CleanRL
WimStraetemans
2023-02-17T09:10:09Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T09:09:32Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 60.11 +/- 72.90 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 1000000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 512 'anneal_lr': True 'gae': True 'gamma': 0.999 'gae_lambda': 0.98 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Rowehn/ppo-LunarLander-CleanRL' 'batch_size': 2048 'minibatch_size': 512} ```
stevendee5/bert-finetuned-squad
stevendee5
2023-02-17T09:09:14Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-16T19:40:25Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
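## Usage example (sketch)

A minimal inference sketch with the standard `transformers` question-answering pipeline; the question and context are invented:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="stevendee5/bert-finetuned-squad")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result["answer"], result["score"])
```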
pyesonekyaw/recycletree_materials
pyesonekyaw
2023-02-17T09:08:36Z
0
1
fastai
[ "fastai", "image-classification", "license:openrail", "region:us" ]
image-classification
2023-02-17T08:43:20Z
---
tags:
- fastai
library_name: fastai
pipeline_tag: image-classification
license: openrail
---

# RecycleTree - Materials Classification Model

![Banner](https://huggingface.co/pyesonekyaw/recycletree_plastic/resolve/main/banner.png)

RecycleTree is a project from CZ3002 Advanced Software Engineering at Nanyang Technological University. It aims to enable users to have a more informed recycling experience, from finding the nearest recycling bins, to checking whether the item they wish to recycle can indeed be recycled, to learning more about recycling and contamination in general. The whole project can be found on [GitHub](https://github.com/py-sk/RecycleTree).

This particular image classification model classifies trash items into the following material classes:

* Paper
* Plastic
* Glass
* Metal
* Others

## Training Data

The training dataset had around 5000 images across 5 classes, with each class having roughly the same distribution of images. The images were either scraped from Google image search or collected by ourselves in real life.

## Training Procedure

As the purpose of this model was to act just as a proof of concept for quick prototyping of RecycleTree, I opted to use the fast.ai library and a simple ResNet34 architecture. The training procedure follows the recommendations from [fast.ai](https://docs.fast.ai/).

## Other Models

There are also other models in the RecycleTree model series:

* [Plastic Classification Model](https://huggingface.co/pyesonekyaw/recycletree_plastic) - Classification of images of plastic trash into different classes
* [Paper Classification Model](https://huggingface.co/pyesonekyaw/recycletree_paper) - Classification of images of paper trash into different classes
* [Metal Classification Model](https://huggingface.co/pyesonekyaw/recycletree_metal) - Classification of images of metal trash into different classes
* [Glass Classification Model](https://huggingface.co/pyesonekyaw/recycletree_glass) - Classification of images of glass trash into different classes
* [Others Classification Model](https://huggingface.co/pyesonekyaw/recycletree_others) - Classification of images of other (not paper, metal, glass, or plastic) trash into different classes
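## Usage example (sketch)

A minimal loading sketch, assuming the repository contains a fastai Learner exported in a format compatible with `huggingface_hub.from_pretrained_fastai`; the image path is a placeholder:

```python
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("pyesonekyaw/recycletree_materials")

# predict returns (label, label_index, per-class probabilities)
label, _, probs = learner.predict("trash_item.jpg")  # placeholder image path
print(label, probs.max())
```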
pyesonekyaw/recycletree_paper
pyesonekyaw
2023-02-17T09:06:16Z
0
0
fastai
[ "fastai", "image-classification", "license:openrail", "region:us" ]
image-classification
2023-02-17T08:43:03Z
---
tags:
- fastai
library_name: fastai
pipeline_tag: image-classification
license: openrail
---

# RecycleTree - Paper Classification Model

![Banner](https://huggingface.co/pyesonekyaw/recycletree_plastic/resolve/main/banner.png)

RecycleTree is a project from CZ3002 Advanced Software Engineering at Nanyang Technological University. It aims to enable users to have a more informed recycling experience, from finding the nearest recycling bins, to checking whether the item they wish to recycle can indeed be recycled, to learning more about recycling and contamination in general. The whole project can be found on [GitHub](https://github.com/py-sk/RecycleTree).

This particular image classification model classifies paper trash items into the following classes:

* Beverage Carton
* Cardboard
* Chopsticks
* Disposables
* Paper Bag
* Paper Packaging
* Paper Product
* Paper Receipt
* Paper Roll
* Paper Sheet
* Tissue Box
* Tissue Paper

## Training Data

The training dataset had 9646 images across 12 classes, with each class having roughly the same distribution of images. The images were either scraped from Google image search or collected by ourselves in real life.

## Training Procedure

As the purpose of this model was to act just as a proof of concept for quick prototyping of RecycleTree, I opted to use the fast.ai library and a simple ResNet34 architecture. The training procedure follows the recommendations from [fast.ai](https://docs.fast.ai/).

## Other Models

There are also other models in the RecycleTree model series:

* [Materials Classification Model](https://huggingface.co/pyesonekyaw/recycletree_materials) - Classification of images of trash into different materials
* [Plastic Classification Model](https://huggingface.co/pyesonekyaw/recycletree_plastic) - Classification of images of plastic trash into different classes
* [Metal Classification Model](https://huggingface.co/pyesonekyaw/recycletree_metal) - Classification of images of metal trash into different classes
* [Glass Classification Model](https://huggingface.co/pyesonekyaw/recycletree_glass) - Classification of images of glass trash into different classes
* [Others Classification Model](https://huggingface.co/pyesonekyaw/recycletree_others) - Classification of images of other (not paper, metal, glass, or plastic) trash into different classes
junnyu/demo_test
junnyu
2023-02-17T09:03:30Z
0
0
null
[ "paddlepaddle", "stable-diffusion", "stable-diffusion-ppdiffusers", "text-to-image", "ppdiffusers", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-02-17T09:03:21Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-ppdiffusers
- text-to-image
- ppdiffusers
- lora
inference: false
---

# LoRA DreamBooth - junnyu/demo_test

These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
pyesonekyaw/recycletree_metal
pyesonekyaw
2023-02-17T09:00:15Z
0
0
fastai
[ "fastai", "image-classification", "license:openrail", "region:us" ]
image-classification
2023-02-17T08:42:36Z
---
tags:
- fastai
library_name: fastai
pipeline_tag: image-classification
license: openrail
---

# RecycleTree - Metal Classification Model

![Banner](https://huggingface.co/pyesonekyaw/recycletree_plastic/resolve/main/banner.png)

RecycleTree is a project from CZ3002 Advanced Software Engineering at Nanyang Technological University. It aims to give users a more informed recycling experience, from finding the nearest recycling bins, to checking whether the item they wish to recycle can indeed be recycled, to learning more about recycling and contamination in general. The whole project can be found on [GitHub](https://github.com/py-sk/RecycleTree).

This image classification model classifies metal trash items into the following classes:

* Aerosol Can
* Aluminum Tray Foil
* Metal Can/Container

## Training Data

The training dataset had 10872 images across 3 classes, with roughly the same number of images in each class. The images were either scraped from Google image search or taken by ourselves in real life.

## Training Procedure

As this model was meant only as a proof of concept for quick prototyping of RecycleTree, I opted for the fast.ai library and a simple ResNet34 architecture. The training procedure follows the recommendations from [fast.ai](https://docs.fast.ai/).

## Other Models

There are also other models in the RecycleTree model series:

* [Materials Classification Model](https://huggingface.co/pyesonekyaw/recycletree_materials) - Classification of images of trash into different materials
* [Paper Classification Model](https://huggingface.co/pyesonekyaw/recycletree_paper) - Classification of images of paper trash into different classes
* [Plastic Classification Model](https://huggingface.co/pyesonekyaw/recycletree_plastic) - Classification of images of plastic trash into different classes
* [Glass Classification Model](https://huggingface.co/pyesonekyaw/recycletree_glass) - Classification of images of glass trash into different classes
* [Others Classification Model](https://huggingface.co/pyesonekyaw/recycletree_others) - Classification of images of other (not paper, metal, glass, or plastic) trash into different classes
ybelkada/gpt-j-6b-detoxified-24-shdl-400steps
ybelkada
2023-02-17T08:58:27Z
8
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-02-17T08:52:31Z
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for GPT-J 6B detoxified

<!-- Provide a quick summary of what the model is/does. -->

This model is a GPT-J 6B model that has been detoxified using RLHF.

# Training details

Training logs can be found [here](https://wandb.ai/distill-bloom/trl/runs/2dm41xvj?workspace=user-younesbelkada)
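# Usage

A minimal text-generation sketch with `transformers`; fp16 and `device_map="auto"` (which requires `accelerate`) are assumptions to fit a 6B model in memory, and the prompt and sampling settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ybelkada/gpt-j-6b-detoxified-24-shdl-400steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPT-J 6B is large; half precision plus device_map="auto" keeps memory manageable.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The movie was", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```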
paulinho123/PPO-LunarLander-v2-1
paulinho123
2023-02-17T08:51:36Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T08:50:59Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -137.23 +/- 89.42
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption, so adjust it to the file actually stored in this repo:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename -- check the repo's file list.
checkpoint = load_from_hub("paulinho123/PPO-LunarLander-v2-1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
Vibharkchauhan/distilbert-base-uncased-finetuned-imdb
Vibharkchauhan
2023-02-17T08:45:58Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-17T08:41:51Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 2.4721

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086        | 1.0   | 157  | 2.4898          |
| 2.5796        | 2.0   | 314  | 2.4230          |
| 2.5269        | 3.0   | 471  | 2.4354          |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
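## Usage

A minimal fill-mask sketch with the `transformers` pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned masked language model from the Hub.
fill_mask = pipeline("fill-mask", model="Vibharkchauhan/distilbert-base-uncased-finetuned-imdb")

# [MASK] is DistilBERT's mask token.
for pred in fill_mask("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```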
amjadfqs/swinv2-tiny-patch4-window8-256-finetuned-brain-tumor
amjadfqs
2023-02-17T08:43:00Z
33
1
transformers
[ "transformers", "pytorch", "tensorboard", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-16T23:55:27Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-tiny-patch4-window8-256-finetuned-brain-tumor
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9898527004909984
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swinv2-tiny-patch4-window8-256-finetuned-brain-tumor

This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.0457
- Accuracy: 0.9899

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 38
- eval_batch_size: 38
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 152
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3382        | 1.0   | 47   | 0.1669          | 0.9385   |
| 0.1014        | 2.0   | 94   | 0.0901          | 0.9725   |
| 0.0662        | 3.0   | 141  | 0.0457          | 0.9899   |
| 0.0441        | 4.0   | 188  | 0.0484          | 0.9866   |
| 0.0242        | 5.0   | 235  | 0.0469          | 0.9895   |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
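## Usage

A minimal inference sketch with the `transformers` image-classification pipeline; the input image path is hypothetical:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="amjadfqs/swinv2-tiny-patch4-window8-256-finetuned-brain-tumor",
)
# "scan.png" is a hypothetical input image.
print(classifier("scan.png"))
```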
satoshi-2000/simp_200_bert_5_1
satoshi-2000
2023-02-17T08:40:32Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-02-17T05:25:02Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: epoch_10_1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# epoch_10_1

This model is a fine-tuned version of [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.2078
- Accuracy: 0.7527

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.18.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
AnAmbitiousMonk/ppo-LunarLander-v5
AnAmbitiousMonk
2023-02-17T08:40:07Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T08:39:42Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 275.42 +/- 20.49
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption, so adjust it to the file actually stored in this repo:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename -- check the repo's file list.
checkpoint = load_from_hub("AnAmbitiousMonk/ppo-LunarLander-v5", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
imjunaidafzal/circulus-sd-photoreal-v2-6-custom
imjunaidafzal
2023-02-17T08:26:54Z
1
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-17T08:18:50Z
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---

### Fine-tuned concept: circulus/sd-photoreal-v2.6-Custom

### Training steps: 1500

### Text encoder steps: 350% of training steps

Sample pictures of this concept:
taraxis/melov2
taraxis
2023-02-17T07:38:29Z
1
1
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-17T07:37:03Z
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: mloctst
---

### Melov2 Dreambooth model trained by taraxis with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model

You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of: mloctst (use that in your prompt)

![mloctst 0](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%281%29.jpg)
![mloctst 1](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%282%29.jpg)
![mloctst 2](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%283%29.jpg)
![mloctst 3](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%284%29.jpg)
![mloctst 4](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%285%29.jpg)
![mloctst 5](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%286%29.jpg)
![mloctst 6](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%287%29.jpg)
![mloctst 7](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%288%29.jpg)
![mloctst 8](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%289%29.jpg)
![mloctst 9](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2810%29.jpg)
![mloctst 10](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2811%29.jpg)
![mloctst 11](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2812%29.jpg)
![mloctst 12](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2813%29.jpg)
![mloctst 13](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2814%29.jpg)
![mloctst 14](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2815%29.jpg)
![mloctst 15](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2816%29.jpg)
![mloctst 16](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2817%29.jpg)
![mloctst 17](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2818%29.jpg)
![mloctst 18](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2819%29.jpg)
![mloctst 19](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2820%29.jpg)
![mloctst 20](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2821%29.jpg)
![mloctst 21](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2822%29.jpg)
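A minimal inference sketch with `diffusers`; fp16 and a CUDA device are assumptions, so adjust to your hardware:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("taraxis/melov2", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of mloctst").images[0]  # "mloctst" is the trained concept token
image.save("melov2_sample.png")
```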
brand25/q-FrozenLake-v1-4x4-noSlippery
brand25
2023-02-17T07:06:05Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-05T10:14:47Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub is the pickle-loading helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="brand25/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Goddrew/ppo-LunarLander-v2-Mark2
Goddrew
2023-02-17T07:05:45Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T07:05:17Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 275.95 +/- 13.49
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption, so adjust it to the file actually stored in this repo:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename -- check the repo's file list.
checkpoint = load_from_hub("Goddrew/ppo-LunarLander-v2-Mark2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
dcerys/distilbert-base-uncased-finetuned-squad
dcerys
2023-02-17T07:05:23Z
21
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-17T03:27:04Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.1511

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2103        | 1.0   | 5533  | 1.1582          |
| 0.9536        | 2.0   | 11066 | 1.1241          |
| 0.7529        | 3.0   | 16599 | 1.1511          |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
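## Usage

A minimal question-answering sketch with the `transformers` pipeline; the question and context are illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="dcerys/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```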
egyee/dqn-SpaceInvadersNoFrameskip-v4-Test_1
egyee
2023-02-17T06:59:38Z
8
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T06:58:58Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 227.50 +/- 137.95
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eryzml -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eryzml -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga eryzml
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 150000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.00025),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
RyuExcalibur/bart-large-mnli-aitools-9n
RyuExcalibur
2023-02-17T06:59:29Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T06:36:10Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bart-large-mnli-aitools-9n
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart-large-mnli-aitools-9n

This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1279
- Accuracy: 0.9778

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.06  | 50   | 0.3407          | 0.9111   |
| No log        | 0.12  | 100  | 0.4275          | 0.9111   |
| No log        | 0.18  | 150  | 0.3695          | 0.9111   |
| No log        | 0.25  | 200  | 0.5053          | 0.9111   |
| No log        | 0.31  | 250  | 0.4133          | 0.9111   |
| No log        | 0.37  | 300  | 0.1484          | 0.9389   |
| No log        | 0.43  | 350  | 0.4724          | 0.9222   |
| No log        | 0.49  | 400  | 0.3731          | 0.9389   |
| No log        | 0.55  | 450  | 0.2616          | 0.95     |
| 0.4158        | 0.62  | 500  | 0.3245          | 0.9444   |
| 0.4158        | 0.68  | 550  | 0.1754          | 0.9611   |
| 0.4158        | 0.74  | 600  | 0.2185          | 0.9611   |
| 0.4158        | 0.8   | 650  | 0.1815          | 0.9667   |
| 0.4158        | 0.86  | 700  | 0.1974          | 0.95     |
| 0.4158        | 0.92  | 750  | 0.2370          | 0.9667   |
| 0.4158        | 0.98  | 800  | 0.1629          | 0.9722   |
| 0.4158        | 1.05  | 850  | 0.1581          | 0.9778   |
| 0.4158        | 1.11  | 900  | 0.0895          | 0.9778   |
| 0.4158        | 1.17  | 950  | 0.1237          | 0.9778   |
| 0.2081        | 1.23  | 1000 | 0.1279          | 0.9778   |
| 0.2081        | 1.29  | 1050 | 0.1284          | 0.9778   |
| 0.2081        | 1.35  | 1100 | 0.1418          | 0.9722   |
| 0.2081        | 1.41  | 1150 | 0.1998          | 0.9667   |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
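## Usage

A minimal text-classification sketch with the `transformers` pipeline; the example sentence is illustrative, and the fine-tuned label set can be inspected via `model.config.id2label`:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="RyuExcalibur/bart-large-mnli-aitools-9n")
print(classifier("An AI tool that generates images from text prompts."))
```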
jmoraes/bert-finetuned-squad
jmoraes
2023-02-17T06:49:10Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-08T05:43:21Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
Elaina617/anything-orangemix2
Elaina617
2023-02-17T06:48:46Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-17T06:42:16Z
---
license: creativeml-openrail-m
---
Gokulapriyan/swin-tiny-patch4-window7-224-finetuned-og_dataset_10e-finetuned-og_dataset_10e
Gokulapriyan
2023-02-17T06:46:35Z
36
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-17T04:47:53Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-og_dataset_10e-finetuned-og_dataset_10e
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-og_dataset_10e-finetuned-og_dataset_10e

This model is a fine-tuned version of [Gokulapriyan/swin-tiny-patch4-window7-224-finetuned-og_dataset_10e](https://huggingface.co/Gokulapriyan/swin-tiny-patch4-window7-224-finetuned-og_dataset_10e) on the imagefolder dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.0340
- eval_accuracy: 0.9878
- eval_runtime: 171.5097
- eval_samples_per_second: 72.002
- eval_steps_per_second: 2.251
- epoch: 4.0
- step: 2184

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
trinket2023/BERTModelQA
trinket2023
2023-02-17T06:42:00Z
21
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-17T03:43:40Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: BERTModelQA
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# BERTModelQA

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.1564

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3855        | 1.0   | 2188 | 1.2194          |
| 1.0469        | 2.0   | 4376 | 1.1453          |
| 0.8124        | 3.0   | 6564 | 1.1564          |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
YashKalyan/hsd
YashKalyan
2023-02-17T06:04:16Z
0
0
null
[ "hate speech", "vulgarity", "text-classification", "region:us" ]
text-classification
2023-02-17T06:01:16Z
---
pipeline_tag: text-classification
tags:
- hate speech
- vulgarity
---
RyuExcalibur/bart-large-mnli-aitools-6n
RyuExcalibur
2023-02-17T05:58:42Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T05:34:54Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bart-large-mnli-aitools-6n
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart-large-mnli-aitools-6n

This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2748
- Accuracy: 0.9444

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.09  | 50   | 0.0885          | 0.9762   |
| No log        | 0.18  | 100  | 0.4805          | 0.8571   |
| No log        | 0.26  | 150  | 0.2582          | 0.9524   |
| No log        | 0.35  | 200  | 0.2742          | 0.9286   |
| No log        | 0.44  | 250  | 0.1553          | 0.9683   |
| No log        | 0.53  | 300  | 0.2574          | 0.9603   |
| No log        | 0.62  | 350  | 0.3690          | 0.9444   |
| No log        | 0.7   | 400  | 0.3113          | 0.9365   |
| No log        | 0.79  | 450  | 0.3474          | 0.9206   |
| 0.3671        | 0.88  | 500  | 0.2385          | 0.9206   |
| 0.3671        | 0.97  | 550  | 0.2947          | 0.9365   |
| 0.3671        | 1.05  | 600  | 0.2834          | 0.9444   |
| 0.3671        | 1.14  | 650  | 0.2425          | 0.9524   |
| 0.3671        | 1.23  | 700  | 0.2494          | 0.9524   |
| 0.3671        | 1.32  | 750  | 0.3040          | 0.9444   |
| 0.3671        | 1.41  | 800  | 0.2974          | 0.9444   |
| 0.3671        | 1.49  | 850  | 0.2268          | 0.9683   |
| 0.3671        | 1.58  | 900  | 0.3889          | 0.9365   |
| 0.3671        | 1.67  | 950  | 0.3333          | 0.8968   |
| 0.1777        | 1.76  | 1000 | 0.2748          | 0.9444   |
| 0.1777        | 1.85  | 1050 | 0.3463          | 0.9206   |
| 0.1777        | 1.93  | 1100 | 0.2951          | 0.9444   |
| 0.1777        | 2.02  | 1150 | 0.2726          | 0.9524   |
| 0.1777        | 2.11  | 1200 | 0.3241          | 0.9444   |
| 0.1777        | 2.2   | 1250 | 0.3543          | 0.9365   |
| 0.1777        | 2.28  | 1300 | 0.4440          | 0.9444   |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
amartyobanerjee/distilbert-base-uncased-distilled-clinc
amartyobanerjee
2023-02-17T05:42:39Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T05:32:25Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9487096774193549
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-distilled-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set:
- Loss: 0.3445
- Accuracy: 0.9487

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4915        | 1.0   | 318  | 2.5863          | 0.7506   |
| 1.985         | 2.0   | 636  | 1.3027          | 0.8655   |
| 0.9995        | 3.0   | 954  | 0.6997          | 0.9116   |
| 0.5484        | 4.0   | 1272 | 0.4723          | 0.9374   |
| 0.364         | 5.0   | 1590 | 0.3997          | 0.9435   |
| 0.2855        | 6.0   | 1908 | 0.3724          | 0.9439   |
| 0.2475        | 7.0   | 2226 | 0.3573          | 0.9481   |
| 0.2267        | 8.0   | 2544 | 0.3517          | 0.9458   |
| 0.2173        | 9.0   | 2862 | 0.3480          | 0.9468   |
| 0.2112        | 10.0  | 3180 | 0.3445          | 0.9487   |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.12.0
- Datasets 1.16.1
- Tokenizers 0.10.3
Seiriryu/VToonify
Seiriryu
2023-02-17T05:37:15Z
0
1
pytorch
[ "pytorch", "style-transfer", "face-stylization", "arxiv:2209.11224", "region:us" ]
null
2023-02-17T04:52:32Z
---
library_name: pytorch
tags:
- style-transfer
- face-stylization
---

## Model Details

This system provides a web demo for the following paper:

**VToonify: Controllable High-Resolution Portrait Video Style Transfer (TOG/SIGGRAPH Asia 2022)**

- Developed by: Shuai Yang, Liming Jiang, Ziwei Liu and Chen Change Loy
- Resources for more information:
  - [Project Page](https://www.mmlab-ntu.com/project/vtoonify/)
  - [Research Paper](https://arxiv.org/abs/2209.11224)
  - [GitHub Repo](https://github.com/williamyang1991/VToonify)

**Abstract**

> Generating high-quality artistic portrait videos is an important and desirable task in computer graphics and vision. Although a series of successful portrait image toonification models built upon the powerful StyleGAN have been proposed, these image-oriented methods have obvious limitations when applied to videos, such as the fixed frame size, the requirement of face alignment, missing non-facial details and temporal inconsistency. In this work, we investigate the challenging controllable high-resolution portrait video style transfer by introducing a novel **VToonify** framework. Specifically, VToonify leverages the mid- and high-resolution layers of StyleGAN to render high-quality artistic portraits based on the multi-scale content features extracted by an encoder to better preserve the frame details. The resulting fully convolutional architecture accepts non-aligned faces in videos of variable size as input, contributing to complete face regions with natural motions in the output. Our framework is compatible with existing StyleGAN-based image toonification models to extend them to video toonification, and inherits appealing features of these models for flexible style control on color and intensity. This work presents two instantiations of VToonify built upon Toonify and DualStyleGAN for collection-based and exemplar-based portrait video style transfer, respectively. Extensive experimental results demonstrate the effectiveness of our proposed VToonify framework over existing methods in generating high-quality and temporally-coherent artistic portrait videos with flexible style controls.

## Citation Information

```bibtex
@article{yang2022Vtoonify,
  title={VToonify: Controllable High-Resolution Portrait Video Style Transfer},
  author={Yang, Shuai and Jiang, Liming and Liu, Ziwei and Loy, Chen Change},
  journal={ACM Transactions on Graphics (TOG)},
  volume={41},
  number={6},
  articleno={203},
  pages={1--15},
  year={2022},
  publisher={ACM New York, NY, USA},
  doi={10.1145/3550454.3555437},
}
```

## License

[S-Lab License 1.0](https://github.com/williamyang1991/VToonify/blob/main/LICENSE.md)
taraxis/melov1
taraxis
2023-02-17T05:06:07Z
2
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-17T04:56:42Z
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: mlloctst
---

### Melov1 Dreambooth model trained by taraxis with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model

You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of: mlloctst (use that in your prompt)

![mlloctst 0](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%281%29.jpg)
![mlloctst 1](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%282%29.jpg)
![mlloctst 2](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%283%29.jpg)
![mlloctst 3](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%284%29.jpg)
![mlloctst 4](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%285%29.jpg)
![mlloctst 5](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%286%29.jpg)
![mlloctst 6](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%287%29.jpg)
![mlloctst 7](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%288%29.jpg)
![mlloctst 8](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%289%29.jpg)
![mlloctst 9](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2810%29.jpg)
![mlloctst 10](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2811%29.jpg)
![mlloctst 11](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2812%29.jpg)
![mlloctst 12](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2813%29.jpg)
![mlloctst 13](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2814%29.jpg)
![mlloctst 14](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2815%29.jpg)
![mlloctst 15](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2816%29.jpg)
![mlloctst 16](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2817%29.jpg)
![mlloctst 17](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2818%29.jpg)
![mlloctst 18](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2819%29.jpg)
![mlloctst 19](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2820%29.jpg)
![mlloctst 20](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2821%29.jpg)
![mlloctst 21](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2822%29.jpg)
![mlloctst 22](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2823%29.jpg)
![mlloctst 23](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2824%29.jpg)
![mlloctst 24](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2825%29.jpg)
rocca/simvp-web
rocca
2023-02-17T04:51:16Z
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2022-12-01T05:00:27Z
---
license: apache-2.0
---

Details here: https://github.com/josephrocca/SimVP-web
ZhihongDeng/q-Taxi-v3
ZhihongDeng
2023-02-17T04:48:03Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T04:35:39Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.52 +/- 2.77
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# load_from_hub is the pickle-loading helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="ZhihongDeng/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
akera/whisper-tiny-lg
akera
2023-02-17T04:13:50Z
4
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_6_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-17T01:39:37Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_6_0
model-index:
- name: whisper-tiny-lg
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-tiny-lg

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the common_voice_6_0 dataset. It achieves the following results on the evaluation set:
- eval_loss: 2.0726
- eval_wer: 228.8298
- eval_runtime: 246.146
- eval_samples_per_second: 2.84
- eval_steps_per_second: 0.179
- epoch: 41.67
- step: 1000

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
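## Usage

A minimal transcription sketch with the `transformers` pipeline; the audio file path is hypothetical, and decoding audio files requires `ffmpeg`:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="akera/whisper-tiny-lg")
# "clip.wav" is a hypothetical Luganda audio file.
print(asr("clip.wav")["text"])
```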
trinket2023/my_awesome_qa_model
trinket2023
2023-02-17T03:35:07Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-17T02:33:12Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_qa_model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.2288

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7303        | 1.0   | 875  | 1.3847          |
| 1.2529        | 2.0   | 1750 | 1.2317          |
| 0.9488        | 3.0   | 2625 | 1.2288          |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
xenon3134-mc/empty-eyes-LoRAs
xenon3134-mc
2023-02-17T03:27:32Z
0
17
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-10T05:49:44Z
---
license: creativeml-openrail-m
---

# LoRAs

When using these LoRAs, you may get better images by redrawing only the face or eyes with inpaint, or by reducing the LoRA weight.

- [utsurome_v3.safetensors](#utsurome_v3.safetensors)
  - base model: [7th_anime_3.1_Cg](https://huggingface.co/syaimu/7th_test)
  - training dataset: [empty-eyes-dataset](https://huggingface.co/datasets/xenon3134-mc/empty-eyes-dataset/tree/main/empty_eyes)
- [yorime.safetensors](#yorime.safetensors)
  - base model: [7th_anime_3.1_Cg](https://huggingface.co/syaimu/7th_test)
- [shirome.safetensors](#shirome.safetensors)
  - base model: [7th_anime_3.1_Cg](https://huggingface.co/syaimu/7th_test)

# utsurome_v3.safetensors

[<img src="https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/utsurome_v3.png" width="512" height="768">](https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/utsurome_v3.png)

<details>
<summary>Sample Prompt</summary>
<pre>
masterpiece, best quality, 1girl, empty eyes, utsurome, maid, smile
Negative prompt: (worst quality:1.4), (low quality:1.4), nsfw,
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 416625012, Size: 512x768, Model hash: 49576e83ad, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: utsurome_v5(2f0093a7aa52), AddNet Weight A 1: 0.6, AddNet Weight B 1: 0.6</pre>
</details>

v3 has improved image quality compared to v2.

[<img src="https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/Comparison.png" width="1200" height="600">](https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/Comparison.png)

# yorime.safetensors

[<img src="https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/yorime.png" width="512" height="768">](https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/yorime.png)

<details>
<summary>Sample Prompt</summary>
<pre>
masterpiece, best quality, 1girl, empty eyes, yorime, smlie, maid
Negative prompt: (worst quality:1.4), (low quality:1.4), (nsfw: 1.3), blush
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 541777558, Size: 512x768, Model hash: 49576e83ad, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: yorime(5924d7962886), AddNet Weight A 1: 1, AddNet Weight B 1: 1</pre>
</details>

# shirome.safetensors

[<img src="https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/shirome.png" width="512" height="768">](https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/shirome.png)

<details>
<summary>Sample Prompt</summary>
<pre>
masterpiece, best quality, 1girl, shirome, blank eyes,upper body,
Negative prompt: (worst quality:1.4), (low quality:1.4), (nsfw:1.3), blush, monochrome,
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 266137625, Size: 512x768, Model hash: 49576e83ad, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: shirome(b89df5356523), AddNet Weight A 1: 1, AddNet Weight B 1: 1</pre>
</details>
Fred99774/valendra
Fred99774
2023-02-17T03:25:01Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-17T03:21:50Z
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---

### valendra Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
rishabhjain16/whisper_large_v2_to_myst_cmu_pf_ot100
rishabhjain16
2023-02-17T03:23:21Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-15T23:14:13Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-large-v2
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: rishabhjain16/infer_myst
      type: rishabhjain16/infer_myst
      config: en
      split: test
    metrics:
    - type: wer
      value: 11.62
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: rishabhjain16/infer_pfs
      type: rishabhjain16/infer_pfs
      config: en
      split: test
    metrics:
    - type: wer
      value: 2.84
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: rishabhjain16/infer_cmu
      type: rishabhjain16/infer_cmu
      config: en
      split: test
    metrics:
    - type: wer
      value: 1.75
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: rishabhjain16/libritts_dev_clean
      type: rishabhjain16/libritts_dev_clean
      config: en
      split: test
    metrics:
    - type: wer
      value: 4.53
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: rishabhjain16/infer_pf_swedish
      type: rishabhjain16/infer_pf_swedish
      config: en
      split: test
    metrics:
    - type: wer
      value: 8.36
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: rishabhjain16/infer_pf_german
      type: rishabhjain16/infer_pf_german
      config: en
      split: test
    metrics:
    - type: wer
      value: 34.26
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: rishabhjain16/infer_pf_italian
      type: rishabhjain16/infer_pf_italian
      config: en
      split: test
    metrics:
    - type: wer
      value: 4.4
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: rishabhjain16/infer_so_chinese
      type: rishabhjain16/infer_so_chinese
      config: en
      split: test
    metrics:
    - type: wer
      value: 14.52
      name: WER
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# openai/whisper-large-v2

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1969
- Wer: 9.3970

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6144        | 0.12  | 500  | 0.2795          | 14.0737 |
| 0.1643        | 0.25  | 1000 | 0.2213          | 11.4916 |
| 0.2175        | 0.38  | 1500 | 0.2009          | 10.0021 |
| 0.1512        | 1.11  | 2000 | 0.1980          | 11.2632 |
| 0.1527        | 1.24  | 2500 | 0.1916          | 10.8469 |
| 0.0918        | 1.36  | 3000 | 0.1890          | 9.6498  |
| 0.047         | 2.1   | 3500 | 0.2034          | 9.4274  |
| 0.0822        | 2.23  | 4000 | 0.1969          | 9.3970  |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
jinukoo/ppo-Huggy
jinukoo
2023-02-17T03:04:03Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-02-17T03:03:56Z
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser:**

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: jinukoo/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
jhsign/xlm-roberta-base-finetuned-panx-ko
jhsign
2023-02-17T02:58:21Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-17T02:44:53Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-ko
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      config: PAN-X.ko
      split: validation
      args: PAN-X.ko
    metrics:
    - name: F1
      type: f1
      value: 0.8620297699594046
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-ko

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set:
- Loss: 0.1756
- F1: 0.8620

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.346         | 1.0   | 787  | 0.2067          | 0.8033 |
| 0.172         | 2.0   | 1574 | 0.1835          | 0.8382 |
| 0.1082        | 3.0   | 2361 | 0.1756          | 0.8620 |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
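## Usage

A minimal Korean NER sketch with the `transformers` token-classification pipeline; the example sentence ("Samsung Electronics is headquartered in Seoul") is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jhsign/xlm-roberta-base-finetuned-panx-ko",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("삼성전자는 서울에 본사를 두고 있다."))
```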
andreids/en_textcat_transport_local_out
andreids
2023-02-17T02:54:06Z
0
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2023-02-17T02:53:45Z
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_textcat_transport_local_out
  results: []
---

| Feature | Description |
| --- | --- |
| **Name** | `en_textcat_transport_local_out` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `textcat` |
| **Components** | `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |

### Label Scheme

<details>
<summary>View label scheme (2 labels for 1 components)</summary>

| Component | Labels |
| --- | --- |
| **`textcat`** | `OTHER`, `5650 - Transport - local` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `CATS_SCORE` | 82.63 |
| `CATS_MICRO_P` | 98.69 |
| `CATS_MICRO_R` | 98.69 |
| `CATS_MICRO_F` | 98.69 |
| `CATS_MACRO_P` | 87.11 |
| `CATS_MACRO_R` | 79.15 |
| `CATS_MACRO_F` | 82.63 |
| `CATS_MACRO_AUC` | 87.65 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TEXTCAT_LOSS` | 107.50 |
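### Usage

A minimal usage sketch; the wheel filename follows spaCy's usual Hub packaging convention and is an assumption, as is the example transaction description:

```python
# Install the packaged pipeline first (assumed wheel name -- check the repo's file list):
#   pip install https://huggingface.co/andreids/en_textcat_transport_local_out/resolve/main/en_textcat_transport_local_out-any-py3-none-any.whl
import spacy

nlp = spacy.load("en_textcat_transport_local_out")
doc = nlp("Taxi fare for the airport transfer")  # hypothetical transaction description
print(doc.cats)  # scores for OTHER vs "5650 - Transport - local"
```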
andreids/en_textcat_subscriptions_software_out
andreids
2023-02-17T02:50:58Z
1
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2023-02-02T23:56:18Z
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_textcat_subscriptions_software_out
  results: []
---

| Feature | Description |
| --- | --- |
| **Name** | `en_textcat_subscriptions_software_out` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `textcat` |
| **Components** | `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |

### Label Scheme

<details>

<summary>View label scheme (2 labels for 1 components)</summary>

| Component | Labels |
| --- | --- |
| **`textcat`** | `OTHER`, `5623 - Subscriptions - software` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `CATS_SCORE` | 83.57 |
| `CATS_MICRO_P` | 96.39 |
| `CATS_MICRO_R` | 96.39 |
| `CATS_MICRO_F` | 96.39 |
| `CATS_MACRO_P` | 90.43 |
| `CATS_MACRO_R` | 78.94 |
| `CATS_MACRO_F` | 83.57 |
| `CATS_MACRO_AUC` | 94.75 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TEXTCAT_LOSS` | 334.31 |
andreids/en_textcat_staff_amenities_out
andreids
2023-02-17T02:48:07Z
2
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2023-02-17T02:47:49Z
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_textcat_staff_amenities_out
  results: []
---

| Feature | Description |
| --- | --- |
| **Name** | `en_textcat_staff_amenities_out` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `textcat` |
| **Components** | `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |

### Label Scheme

<details>

<summary>View label scheme (2 labels for 1 components)</summary>

| Component | Labels |
| --- | --- |
| **`textcat`** | `OTHER`, `5600 - Staff amenities & welfare expenses` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `CATS_SCORE` | 51.78 |
| `CATS_MICRO_P` | 98.18 |
| `CATS_MICRO_R` | 98.18 |
| `CATS_MICRO_F` | 98.18 |
| `CATS_MACRO_P` | 99.09 |
| `CATS_MACRO_R` | 51.15 |
| `CATS_MACRO_F` | 51.78 |
| `CATS_MACRO_AUC` | 71.91 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TEXTCAT_LOSS` | 196.96 |
Madhana/distilroberta-base-finetuned-wikitext2
Madhana
2023-02-17T02:32:25Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-17T02:00:53Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilroberta-base-finetuned-wikitext2

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8359

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0852 | 1.0 | 2406 | 1.9225 |
| 1.993 | 2.0 | 4812 | 1.8837 |
| 1.9616 | 3.0 | 7218 | 1.8234 |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
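
## Usage

A minimal masked-language-modelling sketch with the `pipeline` API; the example sentence is illustrative (RoBERTa tokenizers use `<mask>` as the mask token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Madhana/distilroberta-base-finetuned-wikitext2")

# Print the top predictions for the masked position
for pred in fill_mask("The capital of France is <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```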
andreids/en_textcat_entertainment_expenses_out
andreids
2023-02-17T02:21:27Z
2
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2023-02-17T02:21:05Z
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_textcat_entertainment_expenses_out
  results: []
---

| Feature | Description |
| --- | --- |
| **Name** | `en_textcat_entertainment_expenses_out` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `textcat` |
| **Components** | `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |

### Label Scheme

<details>

<summary>View label scheme (2 labels for 1 components)</summary>

| Component | Labels |
| --- | --- |
| **`textcat`** | `OTHER`, `5150 - Entertainment expenses` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `CATS_SCORE` | 72.44 |
| `CATS_MICRO_P` | 92.95 |
| `CATS_MICRO_R` | 92.95 |
| `CATS_MICRO_F` | 92.95 |
| `CATS_MACRO_P` | 77.90 |
| `CATS_MACRO_R` | 69.07 |
| `CATS_MACRO_F` | 72.44 |
| `CATS_MACRO_AUC` | 91.08 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TEXTCAT_LOSS` | 554.75 |
yuewu/chemical-diffusion
yuewu
2023-02-17T02:18:19Z
21
3
diffusers
[ "diffusers", "text-to-image", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-01T00:55:29Z
---
pipeline_tag: text-to-image
widget:
- text: "A New Family of Hybrid Perovskites Based on the Hypophosphite Ligand"
---

Stable Diffusion checkpoint finetuned on JACS ToC images and titles up to 2022.

The inference widget on the model page (usually to the right of this text) doesn't work very well. You can get better results by running the model on your own system.

For a simple setup, try installing this Stable Diffusion web UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui

You can then download `chemdiff.ckpt` into the web UI's models folder and it should work directly. Samplers such as DPM++ 2S a and DPM++ SDE seem to work pretty well.

Negative prompting can help improve quality - e.g. "out of frame, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts"

More extensive discussion here: https://yue-here.github.io/chemicaldiffusion/
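
For scripted use, a minimal 🧨 Diffusers sketch (assuming the diffusers-format weights hosted in this repo; the prompt reuses the widget example and the negative prompt is the one suggested above):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "yuewu/chemical-diffusion", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "A New Family of Hybrid Perovskites Based on the Hypophosphite Ligand",
    negative_prompt=(
        "out of frame, lowres, text, error, cropped, "
        "worst quality, low quality, jpeg artifacts"
    ),
).images[0]
image.save("toc.png")
```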
nc33/my_awesome_qa_model
nc33
2023-02-17T02:14:11Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-16T01:45:34Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_qa_model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3860

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 100 | 3.6478 |
| No log | 2.0 | 200 | 3.4720 |
| No log | 3.0 | 300 | 3.3860 |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
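
## Usage

A minimal question-answering sketch with the `pipeline` API; the question and context below are illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="nc33/my_awesome_qa_model")

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```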
SayhoKim/sd-class-butterflies-32
SayhoKim
2023-02-17T01:46:52Z
2
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-02-16T02:45:39Z
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---

# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)

This model is a diffusion model for unconditional image generation of cute 🦋.

## Usage

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('SayhoKim/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
allevelly/Analysing_socialMedia_sentiment_on_vaccines
allevelly
2023-02-17T01:45:01Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-15T14:27:29Z
---
tags:
- generated_from_trainer
model-index:
- name: Analysing_socialMedia_sentiment_on_vaccines
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Analysing_socialMedia_sentiment_on_vaccines

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9500
- eval_accuracy: 0.491
- eval_runtime: 63.512
- eval_samples_per_second: 31.49
- eval_steps_per_second: 3.936
- step: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 13

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
jinukoo/ppo-LunarLander-v2
jinukoo
2023-02-17T01:40:51Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T01:40:28Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 252.13 +/- 22.06
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed
checkpoint = load_from_hub(
    repo_id="jinukoo/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
wavymulder/lomo-diffusion
wavymulder
2023-02-17T01:21:58Z
63
25
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "safetensors", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-04T19:41:30Z
---
language:
- en
thumbnail: "https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/images/page1.jpg"
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- safetensors
- diffusers
inference: true
---

**Lomo Diffusion**

![Header](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/images/page1.jpg)

[*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/lomo-1.0.ckpt) - - - [*SAFETENSORS DOWNLOAD LINK*](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/lomo-1.0.safetensors)

This is a dreambooth model trained on a diverse set of stylized photographs.

Use the activation token **lomo style** in your prompt (I recommend at the start)

This model is inspired by the Lomography movement, which embraces the imperfections and style of old LOMO cameras. The model excels at producing bright saturated colors as well as a variety of film artifacts that add to the illusion of a real photograph.

When using most models, I typically use **blur haze** in my negative prompt. I encourage you to experiment and see what works well for you.

Trained from 1.5 with VAE.

Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/paramets_for_samples.txt)

You can [see a non-cherrypicked batch of 49 images here.](https://i.imgur.com/cfIj3iq.jpg)

And you can [see here a direct comparison between Analog Style and Lomo Style.](https://i.imgur.com/ugdFzPI.jpg)

![Environments Example](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/images/page2.jpg)
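
For scripted use, a minimal 🧨 Diffusers sketch (assuming the diffusers-format weights hosted in this repo; the prompt is illustrative and follows the advice above: activation token first, **blur haze** in the negative prompt):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "wavymulder/lomo-diffusion", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "lomo style photograph of a woman on a city street at dusk",
    negative_prompt="blur haze",
).images[0]
image.save("lomo.png")
```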
g8a9/roberta-tiny-8l-10M
g8a9
2023-02-17T01:15:57Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-16T22:06:15Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-tiny-8l-10M
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-tiny-8l-10M

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.3389
- Accuracy: 0.0516

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 100.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 7.8102 | 1.04 | 50 | 7.3747 | 0.0514 |
| 7.805 | 2.08 | 100 | 7.3699 | 0.0517 |
| 7.7907 | 3.12 | 150 | 7.3595 | 0.0517 |
| 7.7838 | 4.16 | 200 | 7.3617 | 0.0514 |
| 7.7706 | 5.21 | 250 | 7.3586 | 0.0514 |
| 7.2933 | 6.25 | 300 | 7.3566 | 0.0513 |
| 7.2932 | 7.29 | 350 | 7.3527 | 0.0516 |
| 7.2986 | 8.33 | 400 | 7.3561 | 0.0516 |
| 7.289 | 9.37 | 450 | 7.3495 | 0.0515 |
| 7.2879 | 10.41 | 500 | 7.3455 | 0.0514 |
| 7.276 | 11.45 | 550 | 7.3477 | 0.0513 |
| 7.3072 | 12.49 | 600 | 7.3446 | 0.0516 |
| 7.2978 | 13.53 | 650 | 7.3463 | 0.0514 |
| 7.2857 | 14.58 | 700 | 7.3426 | 0.0515 |
| 7.2868 | 15.62 | 750 | 7.3438 | 0.0515 |
| 7.2973 | 16.66 | 800 | 7.3442 | 0.0517 |
| 7.2988 | 17.7 | 850 | 7.3437 | 0.0512 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
Seiriryu/stable-diffusion-v1-4
Seiriryu
2023-02-17T01:15:50Z
2
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "arxiv:2207.12598", "arxiv:2112.10752", "arxiv:2103.00020", "arxiv:2205.11487", "arxiv:1910.09700", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-16T07:45:55Z
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
widget:
- text: "A high tech solarpunk utopia in the Amazon rainforest"
  example_title: Amazon rainforest
- text: "A pikachu fine dining with a view to the Eiffel Tower"
  example_title: Pikachu in Paris
- text: "A mecha robot in a favela in expressionist style"
  example_title: Expressionist robot
- text: "an insect robot preparing a delicious meal"
  example_title: Insect robot
- text: "A small cabin on top of a snowy mountain in the style of Disney, artstation"
  example_title: Snowy disney cabin
extra_gated_prompt: |-
  This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
  The CreativeML OpenRAIL License specifies:

  1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
  2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
  3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
  Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---

# Stable Diffusion v1-4 Model Card

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).

The **Stable-Diffusion-v1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned for 225k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).

These weights are intended to be used with the 🧨 Diffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).

## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts.
It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**

      @InProceedings{Rombach_2022_CVPR,
          author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {10684-10695}
      }

## Examples

We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion.

### PyTorch

```bash
pip install --upgrade diffusers transformers scipy
```

Running the pipeline with the default PNDM scheduler:

```python
import torch
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

image.save("astronaut_rides_horse.png")
```

**Note**: If you are limited by GPU memory and have less than 4GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision:

```py
import torch

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

image.save("astronaut_rides_horse.png")
```

To swap out the noise scheduler, pass it to `from_pretrained`:

```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "CompVis/stable-diffusion-v1-4"

# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

image.save("astronaut_rides_horse.png")
```

### JAX/Flax

To use StableDiffusion on TPUs and GPUs for faster inference you can leverage JAX/Flax.
Running the pipeline with default PNDMScheduler

```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard

from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="flax", dtype=jax.numpy.bfloat16
)

prompt = "a photo of an astronaut riding a horse on mars"

prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50

num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)

# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, num_samples)
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```

**Note**: If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from "bf16" branch.

```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard

from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16
)

prompt = "a photo of an astronaut riding a horse on mars"

prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50

num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)

# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, num_samples)
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```

# Uses

## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.

#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.

## Limitations and Bias

### Limitations

- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.

### Bias

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.

### Safety Module

The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.

## Training

**Training Data**
The model developers used the following dataset for training the model:

- LAION-2B (en) and subsets thereof (see next section)

**Training Procedure**
Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,

- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.

We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).

- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant

## Evaluation Results

Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints:

![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg)

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.

## Environmental Impact

**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.

- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation

```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```

*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
jonathang/Protein_Family_Models
jonathang
2023-02-17T00:44:58Z
0
0
null
[ "protein", "nlp", "cnn", "lstm", "region:us" ]
null
2023-01-27T00:34:21Z
---
tags:
- protein
- nlp
- cnn
- lstm
---

This repository serves as the model store for https://huggingface.co/spaces/jonathang/Protein-Family-CNN

Read more here: https://github.com/MLE10-Protein/Research/
dhru/best-title-fit
dhru
2023-02-17T00:33:19Z
23
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain", "summarization", "en", "dataset:dhru/autotrain-data-test-parrot", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2023-02-17T00:29:35Z
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dhru/autotrain-data-test-parrot
co2_eq_emissions:
  emissions: 6.698750906046909
---

# Model Trained Using AutoTrain

- Problem type: Summarization
- Model ID: 3248195543
- CO2 Emissions (in grams): 6.6988

## Validation Metrics

- Loss: 1.241
- Rouge1: 65.393
- Rouge2: 37.758
- RougeL: 51.456
- RougeLsum: 51.486
- Gen Len: 17.945

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/dhru/autotrain-test-parrot-3248195543
```
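
Alternatively, a minimal 🤗 Transformers sketch; the task and repo id are taken from this card's metadata, and the input text is a placeholder:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="dhru/best-title-fit")

article = "Your article text goes here."
print(summarizer(article, max_length=20)[0]["summary_text"])
```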
harsh024/whisper_tn_hi
harsh024
2023-02-17T00:24:05Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-15T11:45:33Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper_tn_hi
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper_tn_hi

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6400
- Wer: 147.3885
- Cer: 295.3741

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 1.6414 | 1.0 | 409 | 0.8809 | 83.9795 | 248.4138 |
| 0.531 | 2.0 | 818 | 0.6346 | 75.4042 | 280.7284 |
| 0.3538 | 3.0 | 1227 | 0.5810 | 69.8595 | 259.1276 |
| 0.2679 | 4.0 | 1636 | 0.5639 | 84.1488 | 332.292 |
| 0.2074 | 5.0 | 2045 | 0.5715 | 85.9773 | 264.435 |
| 0.1599 | 6.0 | 2454 | 0.6074 | 86.481 | 262.8894 |
| 0.1216 | 7.0 | 2863 | 0.6402 | 110.0398 | 285.4649 |
| 0.0903 | 8.0 | 3272 | 0.6736 | 92.7961 | 278.7952 |
| 0.0663 | 9.0 | 3681 | 0.7023 | 96.3896 | 268.8475 |
| 0.0472 | 10.0 | 4090 | 0.7527 | 104.0041 | 276.3805 |
| 0.0335 | 11.0 | 4499 | 0.7907 | 99.0646 | 274.9479 |
| 0.0235 | 12.0 | 4908 | 0.8320 | 128.1004 | 282.2682 |
| 0.0169 | 13.0 | 5317 | 0.8741 | 116.1305 | 277.659 |
| 0.0124 | 14.0 | 5726 | 0.9090 | 137.6534 | 290.6719 |
| 0.0094 | 15.0 | 6135 | 0.9492 | 117.0405 | 285.2422 |
| 0.007 | 16.0 | 6544 | 0.9905 | 122.1663 | 280.3801 |
| 0.0061 | 17.0 | 6953 | 1.0199 | 125.0656 | 277.5049 |
| 0.0051 | 18.0 | 7362 | 1.0383 | 117.3368 | 278.8806 |
| 0.0044 | 19.0 | 7771 | 1.0617 | 110.8736 | 275.7667 |
| 0.0041 | 20.0 | 8180 | 1.0867 | 142.9061 | 291.1584 |
| 0.0037 | 21.0 | 8589 | 1.1224 | 119.377 | 273.63 |
| 0.0026 | 22.0 | 8998 | 1.1322 | 158.2452 | 295.0852 |
| 0.0024 | 23.0 | 9407 | 1.1619 | 134.9446 | 283.1038 |
| 0.0022 | 24.0 | 9816 | 1.1677 | 124.5789 | 283.5701 |
| 0.002 | 25.0 | 10225 | 1.1898 | 125.0275 | 288.5318 |
| 0.0019 | 26.0 | 10634 | 1.1994 | 138.0386 | 288.9011 |
| 0.0023 | 27.0 | 11043 | 1.2216 | 119.7071 | 279.7329 |
| 0.0021 | 28.0 | 11452 | 1.2521 | 96.3388 | 266.6656 |
| 0.0018 | 29.0 | 11861 | 1.2568 | 148.4932 | 288.7504 |
| 0.0018 | 30.0 | 12270 | 1.2541 | 115.3771 | 283.8205 |
| 0.0021 | 31.0 | 12679 | 1.2291 | 98.8995 | 271.6817 |
| 0.0014 | 32.0 | 13088 | 1.2821 | 130.6654 | 293.2197 |
| 0.0014 | 33.0 | 13497 | 1.2804 | 121.8954 | 287.2249 |
| 0.0013 | 34.0 | 13906 | 1.2802 | 137.5857 | 293.6275 |
| 0.0015 | 35.0 | 14315 | 1.3010 | 147.5789 | 296.7907 |
| 0.0014 | 36.0 | 14724 | 1.2945 | 139.6766 | 292.4335 |
| 0.0012 | 37.0 | 15133 | 1.3310 | 144.5653 | 288.2045 |
| 0.0011 | 38.0 | 15542 | 1.3200 | 160.6493 | 306.8297 |
| 0.0009 | 39.0 | 15951 | 1.3394 | 211.9783 | 341.8621 |
| 0.0013 | 40.0 | 16360 | 1.3367 | 133.4166 | 304.8621 |
| 0.0007 | 41.0 | 16769 | 1.3472 | 154.6601 | 319.8702 |
| 0.0005 | 42.0 | 17178 | 1.3617 | 149.0815 | 301.4669 |
| 0.0009 | 43.0 | 17587 | 1.3570 | 163.2312 | 319.4675 |
| 0.0009 | 44.0 | 17996 | 1.3723 | 149.9915 | 310.0088 |
| 0.0009 | 45.0 | 18405 | 1.3809 | 133.1118 | 289.5232 |
| 0.0009 | 46.0 | 18814 | 1.3664 | 166.6427 | 308.9287 |
| 0.0008 | 47.0 | 19223 | 1.3894 | 150.0127 | 304.3739 |
| 0.0005 | 48.0 | 19632 | 1.3632 | 129.4929 | 307.7766 |
| 0.0005 | 49.0 | 20041 | 1.3917 | 143.9304 | 313.3529 |
| 0.0005 | 50.0 | 20450 | 1.4006 | 113.0111 | 295.1966 |
| 0.0007 | 51.0 | 20859 | 1.3966 | 158.3129 | 303.4328 |
| 0.0009 | 52.0 | 21268 | 1.4149 | 138.1613 | 304.2098 |
| 0.0003 | 53.0 | 21677 | 1.3998 | 163.519 | 314.8466 |
| 0.0002 | 54.0 | 22086 | 1.4192 | 141.4035 | 302.2313 |
| 0.0001 | 55.0 | 22495 | 1.4183 | 150.2878 | 300.9336 |
| 0.0002 | 56.0 | 22904 | 1.4281 | 172.598 | 321.0298 |
| 0.0018 | 57.0 | 23313 | 1.4229 | 151.9597 | 309.6211 |
| 0.0009 | 58.0 | 23722 | 1.4263 | 128.9554 | 290.2265 |
| 0.0003 | 59.0 | 24131 | 1.4430 | 135.6599 | 301.7223 |
| 0.0002 | 60.0 | 24540 | 1.4487 | 156.1034 | 307.6167 |
| 0.0004 | 61.0 | 24949 | 1.4252 | 107.7161 | 272.8312 |
| 0.0001 | 62.0 | 25358 | 1.4254 | 123.5122 | 289.272 |
| 0.0 | 63.0 | 25767 | 1.4510 | 121.2901 | 280.6162 |
| 0.0002 | 64.0 | 26176 | 1.4407 | 111.6482 | 284.5364 |
| 0.0003 | 65.0 | 26585 | 1.4512 | 123.5207 | 285.948 |
| 0.0006 | 66.0 | 26994 | 1.4476 | 108.9224 | 280.1608 |
| 0.0005 | 67.0 | 27403 | 1.4721 | 153.8178 | 309.4788 |
| 0.0004 | 68.0 | 27812 | 1.4675 | 132.1341 | 289.9678 |
| 0.0001 | 69.0 | 28221 | 1.4712 | 135.9096 | 292.8338 |
| 0.0001 | 70.0 | 28630 | 1.4712 | 137.0228 | 294.8725 |
| 0.0 | 71.0 | 29039 | 1.4727 | 137.9582 | 292.8438 |
| 0.0 | 72.0 | 29448 | 1.4766 | 135.6514 | 291.9329 |
| 0.0 | 73.0 | 29857 | 1.4808 | 135.7784 | 292.3431 |
| 0.0 | 74.0 | 30266 | 1.4850 | 135.5414 | 291.5527 |
| 0.0 | 75.0 | 30675 | 1.4901 | 134.3224 | 290.6803 |
| 0.0 | 76.0 | 31084 | 1.4943 | 135.9562 | 291.6507 |
| 0.0 | 77.0 | 31493 | 1.4986 | 136.0069 | 291.0294 |
| 0.0 | 78.0 | 31902 | 1.5039 | 139.228 | 292.1162 |
| 0.0 | 79.0 | 32311 | 1.5092 | 138.6862 | 291.7796 |
| 0.0 | 80.0 | 32720 | 1.5146 | 139.8375 | 292.3959 |
| 0.0 | 81.0 | 33129 | 1.5208 | 138.9782 | 292.097 |
| 0.0 | 82.0 | 33538 | 1.5270 | 140.976 | 293.3127 |
| 0.0 | 83.0 | 33947 | 1.5334 | 141.3993 | 292.2359 |
| 0.0 | 84.0 | 34356 | 1.5401 | 141.2258 | 292.2309 |
| 0.0 | 85.0 | 34765 | 1.5472 | 140.7686 | 291.4648 |
| 0.0 | 86.0 | 35174 | 1.5550 | 140.6163 | 291.7997 |
| 0.0 | 87.0 | 35583 | 1.5617 | 142.9104 | 293.0816 |
| 0.0 | 88.0 | 35992 | 1.5700 | 140.9972 | 292.0618 |
| 0.0 | 89.0 | 36401 | 1.5781 | 141.5559 | 292.3054 |
| 0.0 | 90.0 | 36810 | 1.5855 | 142.4109 | 293.033 |
| 0.0 | 91.0 | 37219 | 1.5925 | 145.0436 | 293.8586 |
| 0.0 | 92.0 | 37628 | 1.6010 | 144.2648 | 293.2315 |
| 0.0 | 93.0 | 38037 | 1.6083 | 144.3833 | 293.3211 |
| 0.0 | 94.0 | 38446 | 1.6153 | 146.3007 | 294.4095 |
| 0.0 | 95.0 | 38855 | 1.6207 | 146.9864 | 295.1798 |
| 0.0 | 96.0 | 39264 | 1.6269 | 145.179 | 293.7054 |
| 0.0 | 97.0 | 39673 | 1.6321 | 148.0107 | 295.6043 |
| 0.0 | 98.0 | 40082 | 1.6358 | 147.088 | 295.2686 |
| 0.0 | 99.0 | 40491 | 1.6389 | 148.1503 | 295.822 |
| 0.0 | 100.0 | 40900 | 1.6400 | 147.3885 | 295.3741 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.11.0
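
## Usage

A minimal transcription sketch with the `pipeline` API; the audio path is a placeholder (note the high WER/CER above before relying on this model):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="harsh024/whisper_tn_hi")

# "sample.wav" is a placeholder for a local audio file
print(asr("sample.wav")["text"])
```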
BobMcDear/convmixer20_1024d_patch14_kernel9
BobMcDear
2023-02-17T00:07:19Z
0
0
null
[ "region:us" ]
null
2023-02-17T00:06:08Z
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
Elytum/tiny-classification-fast-6
Elytum
2023-02-16T23:29:20Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-16T22:06:30Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tiny-classification-fast-6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny-classification-fast-6

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3593
- Accuracy: 0.9075

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3142 | 1.0 | 3202 | 0.2983 | 0.9045 |
| 0.22 | 2.0 | 6404 | 0.3593 | 0.9075 |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
sneh1th/bert-finetuned-squad
sneh1th
2023-02-16T23:26:38Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-16T07:29:47Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
Jonnylaw/flan-t5-large
Jonnylaw
2023-02-16T23:10:01Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "unk", "dataset:Jonnylaw/autotrain-data-flan-t5-tunned", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-01-23T04:52:36Z
---
widget:
- text: 'Who was the first president of United States?'
tags:
- text2text-generation
language:
- unk
datasets:
- Jonnylaw/autotrain-data-flan-t5-tunned
co2_eq_emissions:
  emissions: 4.95420834932979
---

# Flan-T5 Large, fine-tuned on a wide range of tasks

## Validation Metrics

- Loss: 1.344
- Rouge1: 62.583
- Rouge2: 52.337
- RougeL: 59.779
- RougeLsum: 60.437
- Gen Len: 15.639

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Jonnylaw/autotrain-flan-t5-tunned-3016686642
```
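
Alternatively, a minimal 🤗 Transformers sketch; the repo id comes from this card and the prompt reuses the widget example:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Jonnylaw/flan-t5-large")

print(generator("Who was the first president of United States?")[0]["generated_text"])
```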