modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-29 18:27:25) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 502 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-29 18:27:24) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
RichardsonTXCarpetCleaning/AreaRugCleaningRichardsonTX | RichardsonTXCarpetCleaning | 2022-12-11T08:27:50Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T08:27:16Z | ---
license: other
---
Area Rug Cleaning Richardson TX
https://carpetcleaning-richardson.com/area-rug-cleaning.html
(972) 454-9815
Do you need the best cleaning services in town from Rug Shampooers? Do you want to bring back the natural beauty of your rugs after they have lost their original appearance? By simply calling our professionals, Richardson TX Carpet Cleaning will be able to properly clean them for you, leaving them looking good and brightening up your home at any time. |
RichardsonTXCarpetCleaning/CarpetStainRemovalRichardsonTX | RichardsonTXCarpetCleaning | 2022-12-11T08:26:40Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T08:25:49Z | ---
license: other
---
Carpet Stain Removal Richardson TX
https://carpetcleaning-richardson.com/carpet-stain-removal.html
(972) 454-9815
One of the reasons our carpet stain cleaning is so popular with customers is that it is eco-friendly. Our products are safe for the home, pets, and children. We are able to quickly clean tough stains that you believe are permanent and cannot be removed from your carpet. You will quickly observe the disappearance of what you thought was a stain that would not go away. |
RichardsonTXCarpetCleaning/RichardsonTXCarpetCleaning | RichardsonTXCarpetCleaning | 2022-12-11T08:25:12Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T08:24:15Z | ---
license: other
---
Richardson TX Carpet Cleaning
https://carpetcleaning-richardson.com/
(972) 454-9815
Pets are wonderful, and usually they are a lot of fun, which is why most of us keep them. Sometimes, however, they make a mess in the house, right on the expensive rug or carpet. A specialist from Richardson Texas Pet Stain Cleaning advises that it is essential to have the stain removed right away: improper or inadequate pet stain removal can set the color permanently, any further staining can damage your carpet completely, and repeated urination can cause an odor that never seems to go away. |
CarpetCleaningAddisonTexas/CarpetCleaningAddisonTexas | CarpetCleaningAddisonTexas | 2022-12-11T08:19:17Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T08:18:47Z | ---
license: other
---
Carpet Cleaning Addison Texas
http://carpetcleaningaddison.com/
(972) 379-7364
Residential carpet cleaners will come to your home when needed to provide a range of services, such as carpet stain removal, deep carpet cleaning, and wall-to-wall carpet cleaning. Some stains become permanent at some point, especially if they are not treated with the appropriate solution. Sooner or later these neglected carpet stains will be left noticeably on the floor forever, and nobody wants an unwanted stain ruining the look of an elegant home. Sometimes holding out for higher standards is the right approach. Our treatments have been tested and rated #1 for the best results available. Carpet Cleaning Addison Texas stays on top with the latest tests and updates for all relevant carpet treatments, and we are 100 percent confident that our tested cleaning products, which have put us in the number 1 position, will leave you completely satisfied. |
luigisaetta/whisper-medium-it | luigisaetta | 2022-12-11T08:19:08Z | 18 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"whisper-event",
"it",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-08T18:00:42Z | ---
language:
- it
license: apache-2.0
tags:
- generated_from_trainer
- whisper-event
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: luigisaetta/whisper-medium-it
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 it
type: mozilla-foundation/common_voice_11_0
config: it
split: test
args: it
metrics:
- name: Wer
type: wer
value: 5.7191
---
# luigisaetta/whisper-medium-it
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1452
- Wer: 5.7191
## Model description
This model is a fine-tuned version of the OpenAI Whisper Medium model on the dataset specified above.
## Intended uses & limitations
This model was developed as part of the Hugging Face Whisper Fine-Tuning sprint, December 2022.
It is meant to spread knowledge about how these models are built and how they can be used to develop solutions
that require ASR for the Italian language.
It has not been extensively tested, and accuracy may be lower on other datasets.
Please test it before using it.
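Below is a minimal usage sketch (not part of the original card) showing one way to run inference with this checkpoint through the `transformers` ASR pipeline; the audio path and the `generate_kwargs` arguments are illustrative assumptions.
```python
# Minimal inference sketch; "audio.mp3" is a placeholder for a local Italian audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="luigisaetta/whisper-medium-it",
    chunk_length_s=30,  # helpful for audio longer than 30 seconds
)

# Forcing the language/task is supported for Whisper checkpoints in recent transformers versions.
result = asr("audio.mp3", generate_kwargs={"language": "italian", "task": "transcribe"})
print(result["text"])
```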
## Training and evaluation data
Trained and tested on Mozilla Common Voice, version 11.
## Training procedure
The script **run.sh** and the Python file used for training are saved in the repository.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1216 | 0.2 | 1000 | 0.2289 | 10.0594 |
| 0.1801 | 0.4 | 2000 | 0.1851 | 7.6593 |
| 0.1763 | 0.6 | 3000 | 0.1615 | 6.5258 |
| 0.1337 | 0.8 | 4000 | 0.1506 | 6.0427 |
| 0.0742 | 1.05 | 5000 | 0.1452 | 5.7191 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
GreenCarpetCleaningGarland/GreenCarpetCleaningGarland | GreenCarpetCleaningGarland | 2022-12-11T08:12:46Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T08:12:22Z | ---
license: other
---
Green Carpet Cleaning Garland
http://garlandcarpetcleaner.com/
(972) 256-8544
One of the methods we follow in carpet cleaning is the "Steam Cleaning Service," which relies on using minimal hot water and more steam, directing the steam to penetrate deep into spots and stains to dissolve all of them, even the toughest ones, and to remove all pollutants from your rug. Then our effective green products clear away all of these elements, leaving your carpet sparkling and bright. Finally, we use our excellent drying machines, so your rug will be fully dry in no time. We have specialized carpet steam cleaners who know how to maintain a high level of professionalism while protecting your rug from any damage. |
CarpetCleaningMesquiteTX/DryerVentCleaningMesquiteTX | CarpetCleaningMesquiteTX | 2022-12-11T08:01:27Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T08:01:08Z | ---
license: other
---
Dryer Vent Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/dryer-vent-cleaning.html
(469) 213-8132
When you wash a lot each week, your dryer often works very hard to dry your clothes. It is safe to assume that your dryer uses a lot of electricity in your home because it is used constantly. |
CarpetCleaningMesquiteTX/AirDuctCleaningMesquiteTX | CarpetCleaningMesquiteTX | 2022-12-11T08:00:43Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T08:00:17Z | ---
license: other
---
Air Duct Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/air-duct-cleaning.html
(469) 213-8132
Cleaning the air ducts is very important. We ensure that your carpets, tile flooring, and rugs are kept clean and in good condition. We can deal with a variety of heater and air conditioner cleaning issues in addition to cleaning air ducts. Your air ducts can be cleaned quickly and inexpensively of dust and debris. No matter how big or small the job is, our team of certified and professionally trained technicians will complete it correctly. |
CarpetCleaningMesquiteTX/TileGroutCleaningMesquiteTX | CarpetCleaningMesquiteTX | 2022-12-11T07:59:54Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:59:32Z | ---
license: other
---
Tile Grout Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/tile-grout-cleaning.html
(469) 213-8132
Your home is your very own castle, and you make every effort to keep it spotless and inviting at all times. However, you will discover that many tasks, including tile and grout cleaning, take up too much of your time. If you live in a house that is entirely tiled, you are aware that it is difficult to maintain the tiles' brightness and shine. |
CarpetCleaningMesquiteTX/RugCleaningMesquiteTX | CarpetCleaningMesquiteTX | 2022-12-11T07:58:08Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:57:46Z | ---
license: other
---
Rug Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/rug-cleaning.html
(469) 213-8132
Carpet and area rug manufacturers recommend using the free hot water extraction system from Our Rug Cleaning. Carpet Cleaning Mesquite TX can also clean some area rugs at a lower temperature, depending on how many fibers they have. These rugs need to be cleaned with cool water routines. Using a high-controlled cleaning process and a deposit-free cleaning result, we remove all dirt, sand, coarseness, and grime from the area rugs. |
CarpetCleaningMesquiteTX/CarpetCleaningMesquiteTX | CarpetCleaningMesquiteTX | 2022-12-11T07:57:15Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:56:56Z | ---
license: other
---
Carpet Cleaning Mesquite TX
http://mesquitecarpetcleaningtx.com/
(469) 213-8132
The best way to get rid of these bugs is expert steam cleaning with a truck mount. Carpet Cleaning Mesquite TX will give you the complete cleaning service that you expect from truly capable operators. Our cleaners guarantee to always provide complete, effective, high-grade carpet service and cleaning all over Mesquite TX and its surrounding area. We have outstanding cleaning consultants who are available to take on cleaning jobs throughout the day in your area. |
CarpetCleaningMckinneyTX/CarpetCleaningMckinneyTX | CarpetCleaningMckinneyTX | 2022-12-11T07:53:59Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:53:36Z | ---
license: other
---
Carpet Cleaning Mckinney TX
https://carpetcleaningmckinneytx.com/
(469) 702-1202
People look for first-class services to keep their homes tidy and up to date. We are confident in what we do because we combine our years of experience with modern equipment, producing the best possible result. For example, our steam carpet cleaning method ensures that the oil stains on your rug are permanently washed out with little water. Your rug will have minimal drying time and be back on the floor faster than expected. |
FortWorthCarpetCleaning/UpholsteryCleaningFortWorthTX | FortWorthCarpetCleaning | 2022-12-11T07:51:04Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:50:42Z | ---
license: other
---
Upholstery Cleaning Fort Worth TX
https://txfortworthcarpetcleaning.com/upholstery-cleaning.html
(817) 523-1237
When you sit on your upholstery, you inhale allergens, dirt, and dust that are trapped in its fibers. Therefore, if you want to ensure the safety of your upholstery, especially if you have children or pets, you need to hire experts in upholstery cleaning in Fort Worth, Texas. We have the best upholstery cleaners who will come to your house and do an excellent job of cleaning it. Understanding the various fibers of your furniture is important to our technicians because it helps them choose effective and safe cleaning methods. When you hire us, we promise to give you a lot of attention and care, and we won't start cleaning your upholstery until we make sure the products we use are safe for the kind of fabric it is made of. |
FortWorthCarpetCleaning/RugCleaningFortWorthTX | FortWorthCarpetCleaning | 2022-12-11T07:49:51Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:49:30Z | ---
license: other
---
Rug Cleaning Fort Worth TX
https://txfortworthcarpetcleaning.com/rug-cleaning.html
(817) 523-1237
Carpet cleaning Fort Worth TX is nearby and able to provide you with professional cleaning services if you require an efficient and high-quality rug cleaning service. Simply contact our professionals, and your rug will regain its vibrant color and stunning appearance. We use products and equipment that enable us to provide you with the best results, such as rug shampooing, which enables us to restore your rug's beautiful appearance and the amazing scent that permeates your entire home. Call us for $20 off these services if you need them. |
GreenCarpetCleaningGrandPrairie/GreenCarpetCleaningGrandPrairie | GreenCarpetCleaningGrandPrairie | 2022-12-11T07:44:13Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:43:51Z | ---
license: other
---
Green Carpet Cleaning Grand Prairie
https://grandprairiecarpetcleaningtx.com/
(214) 301-3659
We provide Carpet Stain Removal that uses environmentally friendly products. We lead the way when it comes to caring for the environment. All of our products are organic and are great not only for the environment but also for your pets and children. |
seastar105/whisper-small-ko-zeroth | seastar105 | 2022-12-11T07:42:51Z | 5 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"whisper-event",
"ko",
"dataset:kresnik/zeroth_korean",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-11T00:49:45Z | ---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
- whisper-event
datasets:
- kresnik/zeroth_korean
metrics:
- wer
model-index:
- name: Whisper Small Korean
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Zeroth Korean
type: kresnik/zeroth_korean
config: clean
split: test
args: 'split: test'
metrics:
- name: Wer
type: wer
value: 6.761029965366662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Korean
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Zeroth Korean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0899
- Wer: 6.7610
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1277 | 0.72 | 1000 | 0.1489 | 12.2271 |
| 0.0379 | 1.44 | 2000 | 0.1053 | 6.7159 |
| 0.0138 | 2.16 | 3000 | 0.0918 | 6.0382 |
| 0.0141 | 2.87 | 4000 | 0.0899 | 6.7610 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0a0+d0d6b1f
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CarpetCleaningPlanoTX/DryerVentCleaningPlanoTX | CarpetCleaningPlanoTX | 2022-12-11T07:35:21Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:34:56Z | ---
license: other
---
Dryer Vent Cleaning Plano TX
https://carpetcleaningplanotx.com/dryer-vent-cleaning.html
(469) 444-1903
It's best not to do electrical work at home if you don't have the knowledge, skills, or equipment. However, you may be concerned about the reason why your relatively new drying machine takes so long to dry your clothes. This service requirement will be met by our Dryer Vent Cleaners. You should soon be enjoying a machine that moves quickly. |
CarpetCleaningPlanoTX/AirVentCleaningPlanoTX | CarpetCleaningPlanoTX | 2022-12-11T07:34:27Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:34:07Z | ---
license: other
---
Air Vent Cleaning Plano TX
https://carpetcleaningplanotx.com/air-vent-cleaning.html
(469) 444-1903
Cleaning air vents need not be difficult. Carpet Cleaning Plano in Texas is a team of experienced air vent cleaners who know how to do the job right. Professionals with certifications make up our team of technicians, who will arrive in our cutting-edge mobile cleaning units.
|
CarpetCleaningPlanoTX/AirDuctCleaningPlanoTX | CarpetCleaningPlanoTX | 2022-12-11T07:33:31Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:33:09Z | ---
license: other
---
Air Duct Cleaning Plano TX
https://carpetcleaningplanotx.com/air-duct-cleaning.html
(469) 444-1903
Studies and other health research have long shown that airborne irritants are bad for your health. Mold, pollen, and dust are examples. These seriously impact your ability to breathe. Allergies and other respiratory issues are brought on by these pollutants, and they may occasionally trigger attacks that can be fatal. What is the most important way to keep the air in your home or place of business clean? It is cleaning air ducts. |
CarpetCleaningPlanoTX/UpholsteryCleaningPlanoTX | CarpetCleaningPlanoTX | 2022-12-11T07:31:41Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:31:20Z | ---
license: other
---
Upholstery Cleaning Plano TX
https://carpetcleaningplanotx.com/upholstery-cleaning.html
(469) 444-1903
We remove stains from sofas. When you have a nice, comfortable sofa in your home, spills are common. On that new couch, game day weekends can be difficult. When they are excited about who is winning on the playing field, friends, family, and pets can cause havoc. After a party, upholstery cleaning is not a problem. We can arrive with our mobile unit, which simplifies the task. |
CarpetCleaningPlanoTX/RugCleaningPlanoTX | CarpetCleaningPlanoTX | 2022-12-11T07:30:50Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:30:22Z | ---
license: other
---
Rug Cleaning Plano TX
https://carpetcleaningplanotx.com/rug-cleaning.html
(469) 444-1903
Don't put your carpets, rugs, and other cleaning needs at risk. In particular, avoid immersing them in hazardous and wasteful chemical processes. At Carpet Cleaning Plano, Texas we use cutting-edge green rug cleaning services that others in Texas cannot match. Rug cleaning is safe and good for the environment thanks to our cutting-edge washing technology, which will not harm your property or put your friends, family, or pets in danger. |
CarpetCleaningPlanoTX/CarpetStainRemovalPlanoTX | CarpetCleaningPlanoTX | 2022-12-11T07:29:56Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:29:29Z | ---
license: other
---
Carpet Stain Removal Plano TX
https://carpetcleaningplanotx.com/carpet-stain-removal.html
(469) 444-1903
Carpet Cleaning Plano in Texas is the company of choice for the majority of customers when it comes to stain removal. We have the best-trained staff and professional technology. We will get rid of even the worst stain, whether it is on your upholstery, fabrics, curtains, or carpets. Try us out today, and you'll see why the majority of people prefer us to everyone else. |
MaviBogaz/ppo-LunarLander-v2 | MaviBogaz | 2022-12-11T07:27:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-11T07:26:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.84 +/- 20.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
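As a hedged completion of the TODO above (not the author's code), loading and evaluating the checkpoint could look like the following sketch; the archive name `ppo-LunarLander-v2.zip` is an assumption about how the model was saved.
```python
# Hypothetical completion of the TODO above; the checkpoint filename is an assumption.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="MaviBogaz/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```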
|
CandyCarpetCleaningIrving/DryerVentCleaningIrvingTX | CandyCarpetCleaningIrving | 2022-12-11T07:22:36Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:21:49Z | ---
license: other
---
Dryer Vent Cleaning Irving TX
(214) 744-3341
https://carpetcleaninginirving.com/dryer-vent.html
We can assist you if you need Lint Buildup Removal in Irving, Texas. Our cleaning technicians have a lot of knowledge and experience to help you. Your dryer won't dry your clothes as well as it used to when it has a lot of this material in it. |
CandyCarpetCleaningIrving/AirVentCleaningIrvingTX | CandyCarpetCleaningIrving | 2022-12-11T07:20:41Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:20:17Z | ---
license: other
---
Air Vent Cleaning Irving TX
https://carpetcleaninginirving.com/air-vent.html
(214) 744-3341
Our capacity to concentrate on the contentment of our clients is one of the ways that we outperform our rivals. Every time we provide services to our customers, we take the time to do it right. We plan our appointments so that our cleaners won't have to rush to serve you because there is a line of customers waiting for them. |
CandyCarpetCleaningIrving/TileGroutCleaningIrvingTX | CandyCarpetCleaningIrving | 2022-12-11T07:18:00Z | 0 | 0 | null | [
"region:us"
] | null | 2022-12-11T07:17:20Z | Tile Grout Cleaning Irving TX
license: other
https://carpetcleaninginirving.com/tile-grout.html
(214) 744-3341
We are available and can assist you at any time if you require Tile and Grout Cleaners in Irving, Texas who view this occupation as a career and make significant investments in comprehending the most effective ways to serve their customers. It's possible that the household cleaners you use are actually making your tile dirty. This includes your mop, which occasionally mixes grease, spills, and dirt with the grout. |
CandyCarpetCleaningIrving/RugCleaningIrvingTX | CandyCarpetCleaningIrving | 2022-12-11T07:15:12Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:12:39Z | ---
license: other
---
Rug Cleaning Irving TX
https://carpetcleaninginirving.com/rug.html
(214) 744-3341
We can help you with Area Rug Cleaning in Irving, Texas, if you need it. We have developed superior cleaning techniques that can bring out the beauty of this home accent, especially if it hasn't been cleaned in a while. |
CandyCarpetCleaningIrving/CandyCarpetCleaningIrving | CandyCarpetCleaningIrving | 2022-12-11T07:11:02Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:10:41Z | ---
license: other
---
Candy Carpet Cleaning Irving
https://carpetcleaninginirving.com/
(214) 744-3341
We use strong cleaning procedures and exceptionally modern, advanced equipment to remove all of the stains from your carpet while at the same time protecting the colors and the fibers from any damage. We also use eco-friendly cleaning products that are 100% safe for your children and pets as well. At the end of our cleaning cycle we will apply a protective coating that will shield the rug from any future stains. |
muhtasham/small-mlm-imdb-target-tweet | muhtasham | 2022-12-11T07:07:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T07:03:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: small-mlm-imdb-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7406417112299465
- name: F1
type: f1
value: 0.7432065579579084
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/small-mlm-imdb](https://huggingface.co/muhtasham/small-mlm-imdb) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2131
- Accuracy: 0.7406
- F1: 0.7432
## Model description
More information needed
## Intended uses & limitations
More information needed
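A minimal usage sketch (not part of the original card), assuming the standard `transformers` text-classification pipeline applies; the example tweet is illustrative and label names depend on the model config.
```python
# Minimal sketch: classify a tweet with the fine-tuned checkpoint.
# Label names come from the model config and may simply be LABEL_0 ... LABEL_3.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="muhtasham/small-mlm-imdb-target-tweet",
)
print(classifier("I can't believe how well this turned out!"))
```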
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5821 | 4.9 | 500 | 0.8006 | 0.7540 | 0.7514 |
| 0.1013 | 9.8 | 1000 | 1.1662 | 0.7567 | 0.7562 |
| 0.0236 | 14.71 | 1500 | 1.5152 | 0.7540 | 0.7518 |
| 0.0125 | 19.61 | 2000 | 1.6963 | 0.7620 | 0.7581 |
| 0.0068 | 24.51 | 2500 | 1.9273 | 0.7380 | 0.7383 |
| 0.0042 | 29.41 | 3000 | 2.0042 | 0.7487 | 0.7500 |
| 0.0041 | 34.31 | 3500 | 2.2131 | 0.7406 | 0.7432 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Sanjay-Papaiahgari/ppo-Huggy | Sanjay-Papaiahgari | 2022-12-11T07:06:57Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-11T07:06:49Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Sanjay-Papaiahgari/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CleaningCarpetDallas/WaterDamageRestorationDallasTX | CleaningCarpetDallas | 2022-12-11T07:05:33Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:05:13Z | ---
license: other
---
http://cleaningcarpetdallas.com/water-damage-restoration.html
(972) 643-8799
Another service you can expect from Cleaning Carpet Dallas TX is water damage restoration. Do you live in a Texas building that has been flooded by a natural disaster? Please inform our staff if you have residential or commercial architecture that has been damaged by a hurricane or flood. |
CleaningCarpetDallas/DryerVentCleaningDallasTX | CleaningCarpetDallas | 2022-12-11T07:04:43Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:04:23Z | ---
license: other
---
http://cleaningcarpetdallas.com/dryer-vent-cleaning.html
(972) 643-8799
Another skill that our Dallas technicians have mastered is cleaning dryer vents. Do you believe that the level of operation of your drying machine is lower than its normal and typical performance? Please let us know if you think there may be clogged ducts and vents so we can assist you. |
muhtasham/mini-mlm-imdb-target-tweet | muhtasham | 2022-12-11T07:03:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T07:00:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: mini-mlm-imdb-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.767379679144385
- name: F1
type: f1
value: 0.7668830990510893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/mini-mlm-imdb](https://huggingface.co/muhtasham/mini-mlm-imdb) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3042
- Accuracy: 0.7674
- F1: 0.7669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8543 | 4.9 | 500 | 0.6920 | 0.7674 | 0.7571 |
| 0.3797 | 9.8 | 1000 | 0.7231 | 0.7727 | 0.7709 |
| 0.1668 | 14.71 | 1500 | 0.9171 | 0.7594 | 0.7583 |
| 0.068 | 19.61 | 2000 | 1.1558 | 0.7647 | 0.7642 |
| 0.0409 | 24.51 | 2500 | 1.3042 | 0.7674 | 0.7669 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CleaningCarpetDallas/AirDuctCleaningDallasTX | CleaningCarpetDallas | 2022-12-11T07:02:43Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T07:02:20Z | ---
license: other
---
http://cleaningcarpetdallas.com/air-duct-cleaning.html
(972) 643-8799
For the health and safety of you and your family, hiring a mold removal service is crucial. If you don't take care of your ducts, you could end up with mold, mildew, and other harmful contaminants in them. In that case, these will be circulated around your house every time you use your air conditioner or heater. |
muhtasham/tiny-mlm-imdb-target-tweet | muhtasham | 2022-12-11T07:00:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T06:56:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: tiny-mlm-imdb-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.6925133689839572
- name: F1
type: f1
value: 0.7003562110650444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/tiny-mlm-imdb](https://huggingface.co/muhtasham/tiny-mlm-imdb) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5550
- Accuracy: 0.6925
- F1: 0.7004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.159 | 4.9 | 500 | 0.9977 | 0.6364 | 0.6013 |
| 0.7514 | 9.8 | 1000 | 0.8549 | 0.7112 | 0.7026 |
| 0.5011 | 14.71 | 1500 | 0.8516 | 0.7032 | 0.6962 |
| 0.34 | 19.61 | 2000 | 0.9019 | 0.7059 | 0.7030 |
| 0.2258 | 24.51 | 2500 | 0.9722 | 0.7166 | 0.7164 |
| 0.1607 | 29.41 | 3000 | 1.0724 | 0.6979 | 0.6999 |
| 0.1127 | 34.31 | 3500 | 1.1435 | 0.7193 | 0.7169 |
| 0.0791 | 39.22 | 4000 | 1.2807 | 0.7059 | 0.7069 |
| 0.0568 | 44.12 | 4500 | 1.3849 | 0.7139 | 0.7159 |
| 0.0478 | 49.02 | 5000 | 1.5550 | 0.6925 | 0.7004 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CleaningCarpetDallas/UpholsteryCleaningDallasTX | CleaningCarpetDallas | 2022-12-11T06:58:59Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2022-12-11T06:58:36Z | ---
license: other
---
http://cleaningcarpetdallas.com/upholstery-cleaning.html
(972) 643-8799
Spots and stains on your microfiber sofa, couch, or loveseat can seriously ruin the appearance of your living room. You won't stand out with your gourmet and designer rugs, grandfather clocks, and artwork, and you'll also make your friends laugh. |
sanchit-gandhi/whisper-small-fr-1k-steps | sanchit-gandhi | 2022-12-11T06:58:16Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"fr",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-11T03:28:49Z | ---
language:
- fr
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small French
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 fr
type: mozilla-foundation/common_voice_11_0
config: fr
split: test
args: fr
metrics:
- name: Wer
type: wer
value: 16.99780428461219
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small French
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 fr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3784
- Wer: 16.9978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3537 | 1.0 | 1000 | 0.3784 | 16.9978 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221210+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
muhtasham/base-vanilla-target-tweet | muhtasham | 2022-12-11T06:56:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T06:46:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: base-vanilla-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7780748663101604
- name: F1
type: f1
value: 0.7772664883136655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-vanilla-target-tweet
This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8380
- Accuracy: 0.7781
- F1: 0.7773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3831 | 4.9 | 500 | 0.9800 | 0.7807 | 0.7785 |
| 0.0414 | 9.8 | 1000 | 1.4175 | 0.7754 | 0.7765 |
| 0.015 | 14.71 | 1500 | 1.6411 | 0.7754 | 0.7708 |
| 0.0166 | 19.61 | 2000 | 1.5930 | 0.7941 | 0.7938 |
| 0.0175 | 24.51 | 2500 | 1.3934 | 0.7888 | 0.7852 |
| 0.0191 | 29.41 | 3000 | 1.9407 | 0.7647 | 0.7658 |
| 0.0137 | 34.31 | 3500 | 1.8380 | 0.7781 | 0.7773 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
darkvibes/vibes-v2 | darkvibes | 2022-12-11T06:40:14Z | 0 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-11T06:29:27Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### VIBES-V2 Dreambooth model trained by darkvibes with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
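A minimal `diffusers` inference sketch (not from the original card); the prompt is a placeholder, since the concept token used during training is not stated here.
```python
# Minimal sketch: load the Dreambooth checkpoint and generate one image.
# The prompt below is a placeholder; replace it with the trained concept token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "darkvibes/vibes-v2",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo in vibes-v2 style").images[0]
image.save("sample.png")
```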
Sample pictures of this concept:

|
muhtasham/mini-vanilla-target-tweet | muhtasham | 2022-12-11T06:37:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T06:33:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: mini-vanilla-target-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.7540106951871658
- name: F1
type: f1
value: 0.7568814825340653
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-vanilla-target-tweet
This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5603
- Accuracy: 0.7540
- F1: 0.7569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9285 | 4.9 | 500 | 0.7493 | 0.7273 | 0.7207 |
| 0.4468 | 9.8 | 1000 | 0.7630 | 0.7460 | 0.7437 |
| 0.2194 | 14.71 | 1500 | 0.8997 | 0.7406 | 0.7455 |
| 0.1062 | 19.61 | 2000 | 1.0822 | 0.7433 | 0.7435 |
| 0.0568 | 24.51 | 2500 | 1.2225 | 0.7620 | 0.7622 |
| 0.0439 | 29.41 | 3000 | 1.3475 | 0.7513 | 0.7527 |
| 0.0304 | 34.31 | 3500 | 1.4999 | 0.7433 | 0.7399 |
| 0.0247 | 39.22 | 4000 | 1.5603 | 0.7540 | 0.7569 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/base-mlm-tweet-target-imdb | muhtasham | 2022-12-11T06:30:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T05:42:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: base-mlm-tweet-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.94368
- name: F1
type: f1
value: 0.9710240368784985
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-mlm-tweet-target-imdb
This model is a fine-tuned version of [muhtasham/base-mlm-tweet](https://huggingface.co/muhtasham/base-mlm-tweet) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2137
- Accuracy: 0.9437
- F1: 0.9710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2617 | 0.64 | 500 | 0.1863 | 0.9342 | 0.9660 |
| 0.1778 | 1.28 | 1000 | 0.1229 | 0.9638 | 0.9816 |
| 0.1322 | 1.92 | 1500 | 0.0893 | 0.9699 | 0.9847 |
| 0.0756 | 2.56 | 2000 | 0.4449 | 0.9056 | 0.9505 |
| 0.063 | 3.2 | 2500 | 0.3961 | 0.9095 | 0.9526 |
| 0.0432 | 3.84 | 3000 | 0.2137 | 0.9437 | 0.9710 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
luigisaetta/whisper-atco2-medium | luigisaetta | 2022-12-11T06:07:13Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:luigisaetta/atco2_normalized_augmented",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-10T19:11:52Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- luigisaetta/atco2_normalized_augmented
metrics:
- wer
model-index:
- name: whisper-atco2-medium
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: luigisaetta/atco2_normalized_augmented
type: luigisaetta/atco2_normalized_augmented
config: en
split: test
metrics:
- name: Wer
type: wer
value: 17.50524109014675
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-atco2-medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the luigisaetta/atco2_normalized_augmented dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6129
- Wer: 17.5052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 2.3939 | 1.06 | 50 | 1.8493 | 66.5618 |
| 0.5127 | 2.13 | 100 | 0.5119 | 30.6080 |
| 0.0626 | 3.19 | 150 | 0.5410 | 20.4403 |
| 0.0157 | 4.25 | 200 | 0.5775 | 19.8113 |
| 0.0107 | 5.32 | 250 | 0.5552 | 19.7065 |
| 0.0044 | 6.38 | 300 | 0.5723 | 18.1342 |
| 0.0013 | 7.45 | 350 | 0.5763 | 17.7149 |
| 0.0005 | 8.51 | 400 | 0.6053 | 17.7149 |
| 0.0004 | 9.57 | 450 | 0.6109 | 17.5052 |
| 0.0004 | 10.64 | 500 | 0.6129 | 17.5052 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.11.0
|
Farras/mt5-small-kompas | Farras | 2022-12-11T05:39:02Z | 4 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-11T00:11:37Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Farras/mt5-small-kompas
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Farras/mt5-small-kompas
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.4473
- Validation Loss: 7.2048
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 230, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 15.0491 | 7.8158 | 0 |
| 10.4473 | 7.2048 | 1 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.10.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
aungmyatv8/ppo-LunarLander-v2 | aungmyatv8 | 2022-12-11T05:23:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-11T05:04:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.93 +/- 21.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
odiaz1066/huggytraining | odiaz1066 | 2022-12-11T05:17:19Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-11T05:17:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: odiaz1066/huggytraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sagawa/ZINC-t5-v2 | sagawa | 2022-12-11T05:11:31Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"dataset:sagawa/ZINC-canonicalized",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-06T01:33:39Z | ---
license: mit
datasets:
- sagawa/ZINC-canonicalized
metrics:
- accuracy
model-index:
- name: ZINC-deberta
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: sagawa/ZINC-canonicalized
type: sagawa/ZINC-canonicalized
metrics:
- name: Accuracy
type: accuracy
value: 0.9475839734077454
---
# ZINC-t5
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/ZINC-canonicalized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1228
- Accuracy: 0.9476
## Model description
We trained t5 on SMILES from ZINC using the task of masked-language modeling (MLM). Compared to ZINC-t5, ZINC-t5-v2 uses a character-level tokenizer, and it was also trained on ZINC.
## Intended uses & limitations
This model can be used to predict molecules' properties, reactions, or interactions with proteins, depending on how it is fine-tuned.
As an example, we fine-tuned this model to predict products. The model is [here](https://huggingface.co/sagawa/ZINC-t5-productpredicition), and you can use the demo [here](https://huggingface.co/spaces/sagawa/predictproduct-t5).
Using its encoder, we trained a regression model to predict a reaction yield. You can use this demo [here](https://huggingface.co/spaces/sagawa/predictyield-t5).
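For illustration only (not from the original card), here is a sketch of how the encoder could be used to embed a SMILES string as input for such a downstream model; mean pooling is an assumption, not the procedure used in the linked demos.
```python
# Minimal sketch: embed a SMILES string with the T5 encoder.
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("sagawa/ZINC-t5-v2")
encoder = T5EncoderModel.from_pretrained("sagawa/ZINC-t5-v2")

inputs = tokenizer("CC(=O)Oc1ccccc1C(=O)O", return_tensors="pt")  # aspirin
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # shape: (1, seq_len, d_model)
embedding = hidden.mean(dim=1)                    # simple mean pooling (assumption)
print(embedding.shape)
```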
## Training and evaluation data
We downloaded [ZINC data](https://drive.google.com/drive/folders/1lSPCqh31zxTVEhuiPde7W3rZG8kPgp-z) and canonicalized them using RDKit. Then, we dropped duplicates. The total number of examples is 22,992,522, and they were randomly split into train:validation = 10:1.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-03
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Step | Accuracy | Validation Loss |
|:-------------:|:------:|:--------:|:---------------:|
| 0.2090 | 100000 | 0.9264 | 0.1860 |
| 0.1628 | 200000 | 0.9349 | 0.1613 |
| 0.1632 | 300000 | 0.9395 | 0.1467 |
| 0.1451 | 400000 | 0.9435 | 0.1345 |
| 0.1311 | 500000 | 0.9465 | 0.1261 | |
muhtasham/small-mlm-tweet-target-imdb | muhtasham | 2022-12-11T05:07:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T04:57:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: small-mlm-tweet-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88784
- name: F1
type: f1
value: 0.9405881854394441
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-tweet-target-imdb
This model is a fine-tuned version of [muhtasham/small-mlm-tweet](https://huggingface.co/muhtasham/small-mlm-tweet) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4422
- Accuracy: 0.8878
- F1: 0.9406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3515 | 0.64 | 500 | 0.1494 | 0.9388 | 0.9684 |
| 0.2452 | 1.28 | 1000 | 0.1439 | 0.9450 | 0.9717 |
| 0.1956 | 1.92 | 1500 | 0.2199 | 0.9156 | 0.9559 |
| 0.1398 | 2.56 | 2000 | 0.4328 | 0.876 | 0.9339 |
| 0.1102 | 3.2 | 2500 | 0.4422 | 0.8878 | 0.9406 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-tweet-target-imdb | muhtasham | 2022-12-11T04:49:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T04:42:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: tiny-mlm-tweet-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.84864
- name: F1
type: f1
value: 0.9181235935606715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-tweet-target-imdb
This model is a fine-tuned version of [muhtasham/tiny-mlm-tweet](https://huggingface.co/muhtasham/tiny-mlm-tweet) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4017
- Accuracy: 0.8486
- F1: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5661 | 0.64 | 500 | 0.3869 | 0.8363 | 0.9109 |
| 0.3798 | 1.28 | 1000 | 0.3730 | 0.8390 | 0.9125 |
| 0.3283 | 1.92 | 1500 | 0.2422 | 0.9018 | 0.9484 |
| 0.2926 | 2.56 | 2000 | 0.4156 | 0.8210 | 0.9017 |
| 0.2713 | 3.2 | 2500 | 0.3951 | 0.8405 | 0.9133 |
| 0.2519 | 3.84 | 3000 | 0.2170 | 0.9118 | 0.9539 |
| 0.2329 | 4.48 | 3500 | 0.4214 | 0.8357 | 0.9105 |
| 0.2074 | 5.12 | 4000 | 0.5114 | 0.8032 | 0.8909 |
| 0.1898 | 5.75 | 4500 | 0.4017 | 0.8486 | 0.9181 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ahmed02/Stable-diffusion-1-4 | ahmed02 | 2022-12-11T04:41:27Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-12-11T04:41:27Z | ---
license: bigscience-openrail-m
---
|
sagawa/PubChem-10m-deberta | sagawa | 2022-12-11T04:33:58Z | 55 | 1 | transformers | [
"transformers",
"pytorch",
"deberta",
"fill-mask",
"generated_from_trainer",
"dataset:sagawa/pubchem-10m-canonicalized",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-05T07:12:42Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- sagawa/pubchem-10m-canonicalized
metrics:
- accuracy
model-index:
- name: PubChem-10m-deberta
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: sagawa/pubchem-10m-canonicalized
type: sagawa/pubchem-10m-canonicalized
metrics:
- name: Accuracy
type: accuracy
value: 0.9741235263046233
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubChem10m-deberta-base-output
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the sagawa/pubchem-10m-canonicalized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0698
- Accuracy: 0.9741
## Model description
We trained deberta-base on SMILES from PubChem using the task of masked-language modeling (MLM). Its tokenizer is a character-level tokenizer trained on PubChem.
## Intended uses & limitations
Depending on how it is fine-tuned, this model can be used to predict molecular properties, reactions, or interactions with proteins.
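A minimal fill-mask sketch (the example SMILES and the masked position are arbitrary; because the tokenizer is character-level, each SMILES character is one token):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="sagawa/PubChem-10m-deberta")
smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin
masked = smiles.replace("O", unmasker.tokenizer.mask_token, 1)  # mask the first oxygen
print(unmasker(masked))
```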
## Training and evaluation data
We downloaded [PubChem data](https://drive.google.com/file/d/1ygYs8dy1-vxD1Vx6Ux7ftrXwZctFjpV3/view) and canonicalized it using RDKit, then dropped duplicates. The resulting dataset contains 9,999,960 molecules, randomly split into train:validation = 10:1.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.0855 | 3.68 | 100000 | 0.0801 | 0.9708 |
| 0.0733 | 7.37 | 200000 | 0.0702 | 0.9740 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0
- Datasets 2.4.1.dev0
- Tokenizers 0.11.6
|
JuandaBula/distilroberta-base-mrpc-glue-juanda-bula | JuandaBula | 2022-12-11T04:29:55Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T03:10:58Z | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text:
- >-
Yucaipa owned Dominick 's before selling the chain to Safeway in 1998
for $ 2.5 billion.
- >-
Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to
Safeway for $ 1.8 billion in 1998.
example_title: Not Equivalent
- text:
- >-
Revenue in the first quarter of the year dropped 15 percent from the
same period a year earlier.
- >-
With the scandal hanging over Stewart's company revenue the first
quarter of the year dropped 15 percent from the same period a year
earlier.
example_title: Equivalent
model-index:
- name: distilroberta-base-mrpc-glue-juanda-bula
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8333333333333334
- name: F1
type: f1
value: 0.870722433460076
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-juanda-bula
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- Accuracy: 0.8333
- F1: 0.8707
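A minimal sentence-pair sketch using the widget examples from this card (MRPC is a paraphrase task, so both sentences go into one input; label names depend on the saved id2label mapping):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="JuandaBula/distilroberta-base-mrpc-glue-juanda-bula")
pair = {
    "text": "Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
    "text_pair": "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier.",
}
print(clf(pair))
```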
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5239 | 1.09 | 500 | 0.6723 | 0.7990 | 0.8610 |
| 0.3692 | 2.18 | 1000 | 0.5684 | 0.8333 | 0.8707 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cpu
- Datasets 2.7.1
- Tokenizers 0.13.2
|
redevaaa/fin3 | redevaaa | 2022-12-11T03:59:45Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:fin",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-11T03:32:16Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- fin
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: fin3
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: fin
type: fin
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.944
- name: Recall
type: recall
value: 0.9402390438247012
- name: F1
type: f1
value: 0.9421157684630739
- name: Accuracy
type: accuracy
value: 0.9921209540034072
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin3
This model is a fine-tuned version of [nlpaueb/sec-bert-base](https://huggingface.co/nlpaueb/sec-bert-base) on the fin dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0748
- Precision: 0.944
- Recall: 0.9402
- F1: 0.9421
- Accuracy: 0.9921
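A minimal inference sketch (the entity label set comes from the fin dataset's tag scheme, which this auto-generated card does not list; the example sentence is arbitrary):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="redevaaa/fin3", aggregation_strategy="simple")
print(ner("Elon Musk agreed to acquire Twitter Inc. for $44 billion."))
```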
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 129 | 0.0669 | 0.8821 | 0.9243 | 0.9027 | 0.9883 |
| No log | 2.0 | 258 | 0.0568 | 0.9289 | 0.9363 | 0.9325 | 0.9913 |
| No log | 3.0 | 387 | 0.0565 | 0.9141 | 0.9323 | 0.9231 | 0.9904 |
| 0.0556 | 4.0 | 516 | 0.0617 | 0.9237 | 0.9163 | 0.92 | 0.9904 |
| 0.0556 | 5.0 | 645 | 0.0658 | 0.9243 | 0.9243 | 0.9243 | 0.9904 |
| 0.0556 | 6.0 | 774 | 0.0695 | 0.944 | 0.9402 | 0.9421 | 0.9921 |
| 0.0556 | 7.0 | 903 | 0.0731 | 0.932 | 0.9283 | 0.9301 | 0.9917 |
| 0.0016 | 8.0 | 1032 | 0.0750 | 0.9283 | 0.9283 | 0.9283 | 0.9917 |
| 0.0016 | 9.0 | 1161 | 0.0737 | 0.944 | 0.9402 | 0.9421 | 0.9921 |
| 0.0016 | 10.0 | 1290 | 0.0748 | 0.944 | 0.9402 | 0.9421 | 0.9921 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/small-mlm-imdb-target-imdb | muhtasham | 2022-12-11T03:43:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T03:31:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: small-mlm-imdb-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.91736
- name: F1
type: f1
value: 0.9568990695539701
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-imdb-target-imdb
This model is a fine-tuned version of [muhtasham/small-mlm-imdb](https://huggingface.co/muhtasham/small-mlm-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3145
- Accuracy: 0.9174
- F1: 0.9569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.315 | 0.64 | 500 | 0.1711 | 0.9310 | 0.9642 |
| 0.2248 | 1.28 | 1000 | 0.1385 | 0.9471 | 0.9728 |
| 0.1824 | 1.92 | 1500 | 0.1044 | 0.9610 | 0.9801 |
| 0.1326 | 2.56 | 2000 | 0.2382 | 0.9294 | 0.9634 |
| 0.1056 | 3.2 | 2500 | 0.5074 | 0.8698 | 0.9304 |
| 0.0804 | 3.84 | 3000 | 0.3145 | 0.9174 | 0.9569 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-imdb-target-imdb | muhtasham | 2022-12-11T03:22:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T03:18:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: tiny-mlm-imdb-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88952
- name: F1
type: f1
value: 0.9415301240526694
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-imdb-target-imdb
This model is a fine-tuned version of [muhtasham/tiny-mlm-imdb](https://huggingface.co/muhtasham/tiny-mlm-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2699
- Accuracy: 0.8895
- F1: 0.9415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5432 | 0.64 | 500 | 0.3567 | 0.8578 | 0.9235 |
| 0.366 | 1.28 | 1000 | 0.3687 | 0.8414 | 0.9138 |
| 0.32 | 1.92 | 1500 | 0.2648 | 0.8922 | 0.9430 |
| 0.2868 | 2.56 | 2000 | 0.3868 | 0.8314 | 0.9079 |
| 0.2671 | 3.2 | 2500 | 0.3092 | 0.8774 | 0.9347 |
| 0.248 | 3.84 | 3000 | 0.2699 | 0.8895 | 0.9415 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sebrosen8/rose-shield-model | sebrosen8 | 2022-12-11T03:22:35Z | 4 | 2 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-11T03:20:52Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: dreamroseshield
---
### Rose Shield Dreambooth model trained by sebrosen8 with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) (base model not specified)
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
dreamroseshield (use that in your prompt)

|
rymaju/KB13-t5-small-finetuned-en-to-regex | rymaju | 2022-12-11T02:43:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-05T03:14:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: KB13-t5-small-finetuned-en-to-regex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KB13-t5-small-finetuned-en-to-regex
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4028
- Semantic accuracy: 0.439
- Syntactic accuracy: 0.3659
- Gen Len: 15.3659
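A minimal inference sketch (the prompt format the model was trained on is not documented in this card, so the input below is only illustrative):
```python
from transformers import pipeline

en_to_regex = pipeline("text2text-generation", model="rymaju/KB13-t5-small-finetuned-en-to-regex")
print(en_to_regex("lines containing the word dog followed by a number"))
```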
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Semantic accuracy | Syntactic accuracy | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:------------------:|:-------:|
| No log | 1.0 | 47 | 0.9241 | 0.0488 | 0.0488 | 15.1951 |
| No log | 2.0 | 94 | 0.6326 | 0.3171 | 0.2683 | 14.6341 |
| No log | 3.0 | 141 | 0.5936 | 0.2927 | 0.2683 | 15.1463 |
| No log | 4.0 | 188 | 0.5097 | 0.3415 | 0.3171 | 15.5854 |
| No log | 5.0 | 235 | 0.4467 | 0.3659 | 0.3171 | 15.7073 |
| No log | 6.0 | 282 | 0.3875 | 0.3659 | 0.3415 | 15.4146 |
| No log | 7.0 | 329 | 0.4208 | 0.3659 | 0.3171 | 15.5122 |
| No log | 8.0 | 376 | 0.3551 | 0.3659 | 0.3171 | 15.3659 |
| No log | 9.0 | 423 | 0.2996 | 0.3659 | 0.3171 | 15.3659 |
| No log | 10.0 | 470 | 0.3571 | 0.3902 | 0.3171 | 15.2195 |
| 0.7453 | 11.0 | 517 | 0.3316 | 0.4146 | 0.3415 | 15.3659 |
| 0.7453 | 12.0 | 564 | 0.3371 | 0.4146 | 0.3415 | 15.439 |
| 0.7453 | 13.0 | 611 | 0.3488 | 0.4146 | 0.3415 | 15.439 |
| 0.7453 | 14.0 | 658 | 0.3069 | 0.439 | 0.3659 | 15.4146 |
| 0.7453 | 15.0 | 705 | 0.3289 | 0.439 | 0.3659 | 15.1951 |
| 0.7453 | 16.0 | 752 | 0.3420 | 0.3902 | 0.3171 | 15.0976 |
| 0.7453 | 17.0 | 799 | 0.3190 | 0.4146 | 0.3415 | 15.1463 |
| 0.7453 | 18.0 | 846 | 0.3495 | 0.439 | 0.3659 | 15.1463 |
| 0.7453 | 19.0 | 893 | 0.3588 | 0.439 | 0.3659 | 15.3659 |
| 0.7453 | 20.0 | 940 | 0.3457 | 0.439 | 0.3659 | 15.3659 |
| 0.7453 | 21.0 | 987 | 0.3662 | 0.439 | 0.3659 | 15.3659 |
| 0.1294 | 22.0 | 1034 | 0.3533 | 0.439 | 0.3659 | 15.3659 |
| 0.1294 | 23.0 | 1081 | 0.3872 | 0.4146 | 0.3415 | 15.4146 |
| 0.1294 | 24.0 | 1128 | 0.3902 | 0.4146 | 0.3415 | 15.3659 |
| 0.1294 | 25.0 | 1175 | 0.3802 | 0.439 | 0.3659 | 15.3659 |
| 0.1294 | 26.0 | 1222 | 0.3893 | 0.439 | 0.3659 | 15.4146 |
| 0.1294 | 27.0 | 1269 | 0.4035 | 0.4146 | 0.3415 | 15.1951 |
| 0.1294 | 28.0 | 1316 | 0.4020 | 0.4146 | 0.3415 | 15.3659 |
| 0.1294 | 29.0 | 1363 | 0.3983 | 0.439 | 0.3659 | 15.3659 |
| 0.1294 | 30.0 | 1410 | 0.4028 | 0.439 | 0.3659 | 15.3659 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/medium-vanilla-target-imdb | muhtasham | 2022-12-11T02:36:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T02:20:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: medium-vanilla-target-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8964
- name: F1
type: f1
value: 0.945370175068551
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-vanilla-target-imdb
This model is a fine-tuned version of [google/bert_uncased_L-8_H-512_A-8](https://huggingface.co/google/bert_uncased_L-8_H-512_A-8) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4330
- Accuracy: 0.8964
- F1: 0.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3068 | 0.64 | 500 | 0.2373 | 0.9061 | 0.9507 |
| 0.2143 | 1.28 | 1000 | 0.1204 | 0.9534 | 0.9761 |
| 0.1655 | 1.92 | 1500 | 0.1557 | 0.942 | 0.9701 |
| 0.1107 | 2.56 | 2000 | 0.2791 | 0.9268 | 0.9620 |
| 0.0905 | 3.2 | 2500 | 0.4330 | 0.8964 | 0.9454 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ScrappyCoco666/ppo-LunarLander-v2-5 | ScrappyCoco666 | 2022-12-11T02:14:08Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-11T02:13:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 302.61 +/- 18.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="ScrappyCoco666/ppo-LunarLander-v2-5", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
redevaaa/fin1 | redevaaa | 2022-12-11T02:12:04Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:fin",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-11T01:38:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- fin
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: fin1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: fin
type: fin
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.8315412186379928
- name: Recall
type: recall
value: 0.9243027888446215
- name: F1
type: f1
value: 0.8754716981132076
- name: Accuracy
type: accuracy
value: 0.985175455057234
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the fin dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0778
- Precision: 0.8315
- Recall: 0.9243
- F1: 0.8755
- Accuracy: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 129 | 0.0860 | 0.8535 | 0.9283 | 0.8893 | 0.9904 |
| No log | 2.0 | 258 | 0.1513 | 0.7993 | 0.9203 | 0.8556 | 0.9799 |
| No log | 3.0 | 387 | 0.0977 | 0.8221 | 0.9203 | 0.8684 | 0.9831 |
| 0.0017 | 4.0 | 516 | 0.0783 | 0.8286 | 0.9243 | 0.8738 | 0.9848 |
| 0.0017 | 5.0 | 645 | 0.0778 | 0.8315 | 0.9243 | 0.8755 | 0.9852 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sd-concepts-library/pokemon-rgby-sprite | sd-concepts-library | 2022-12-11T02:10:06Z | 0 | 7 | null | [
"license:mit",
"region:us"
] | null | 2022-12-11T02:02:35Z | ---
license: mit
---
### Pokemon RGBY sprite on Stable Diffusion
Pokémon Red/Green/Blue/Yellow battle sprite concept (GameBoy 56x56 upscaled to 512x512)
This is the `<pkmn-rgby>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
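A minimal `diffusers` sketch for using the concept (the base checkpoint below is an assumption — use whichever Stable Diffusion 1.x model you prefer — and `load_textual_inversion` requires a reasonably recent `diffusers` release):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/pokemon-rgby-sprite")
image = pipe("a water-type monster, <pkmn-rgby> style").images[0]
image.save("pkmn-rgby-monster.png")
```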
Here is the new concept you will be able to use as a `style`:





































































































































































































































































































































































































































































|
ScrappyCoco666/ppo-LunarLander-v2-2 | ScrappyCoco666 | 2022-12-11T01:46:52Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-11T01:46:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 289.76 +/- 15.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="ScrappyCoco666/ppo-LunarLander-v2-2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ScrappyCoco666/ppo-LunarLander-v2-3 | ScrappyCoco666 | 2022-12-11T01:38:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-11T01:38:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.80 +/- 19.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="ScrappyCoco666/ppo-LunarLander-v2-3", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
panopstor/finetunedump | panopstor | 2022-12-11T01:37:20Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-12-11T01:37:20Z | ---
license: creativeml-openrail-m
---
|
eublefar/bigbird-dialogue-score | eublefar | 2022-12-11T01:18:15Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"big_bird",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-10T13:26:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bigbird-dialogue-score
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-dialogue-score
This model is a fine-tuned version of [google/bigbird-roberta-large](https://huggingface.co/google/bigbird-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2129
- eval_f1: 0.9290
- eval_precision: 0.9173
- eval_recall: 0.9410
- eval_runtime: 311.0516
- eval_samples_per_second: 49.304
- eval_steps_per_second: 6.163
- epoch: 1.0
- step: 5432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 6
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
anuragshas/whisper-small-ur | anuragshas | 2022-12-11T00:37:51Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ur",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-10T19:59:32Z | ---
language:
- ur
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Urdu
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 ur
type: mozilla-foundation/common_voice_11_0
config: ur
split: test
args: ur
metrics:
- name: Wer
type: wer
value: 32.68135868933731
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Urdu
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 ur dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7803
- Wer: 32.6814
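A minimal transcription sketch (the audio path is a placeholder; forcing the language/task keeps Whisper from auto-detecting the wrong language):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anuragshas/whisper-small-ur")
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(language="urdu", task="transcribe")
print(asr("sample_ur.wav"))  # placeholder path to a 16 kHz Urdu recording
```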
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2634 | 3.85 | 200 | 0.5562 | 43.3518 |
| 0.0592 | 7.69 | 400 | 0.6271 | 40.8807 |
| 0.0121 | 11.54 | 600 | 0.7298 | 35.4506 |
| 0.0048 | 15.38 | 800 | 0.7803 | 32.6814 |
| 0.0039 | 19.23 | 1000 | 0.7940 | 33.3243 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
zates/distilbert-base-uncased-finetuned-squad-seed-420 | zates | 2022-12-11T00:20:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-10T21:34:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad-seed-420
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-seed-420
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9590
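A minimal QA sketch (SQuAD v2 includes unanswerable questions, so pass `handle_impossible_answer=True` if you want the model to be able to return an empty answer):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="zates/distilbert-base-uncased-finetuned-squad-seed-420")
result = qa(
    question="What does SQuAD v2 add over SQuAD v1?",
    context="SQuAD v2 combines the SQuAD v1 questions with over 50,000 unanswerable questions written adversarially.",
    handle_impossible_answer=True,
)
print(result)
```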
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4491 | 1.0 | 8248 | 2.1014 |
| 2.1388 | 2.0 | 16496 | 1.9590 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/small-mlm-imdb | muhtasham | 2022-12-10T23:57:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-10T23:17:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: small-mlm-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-imdb
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3673
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7542 | 0.16 | 500 | 2.5445 |
| 2.6734 | 0.32 | 1000 | 2.5191 |
| 2.6552 | 0.48 | 1500 | 2.4976 |
| 2.6481 | 0.64 | 2000 | 2.4866 |
| 2.6291 | 0.8 | 2500 | 2.4599 |
| 2.6134 | 0.96 | 3000 | 2.4585 |
| 2.5627 | 1.12 | 3500 | 2.4476 |
| 2.5564 | 1.28 | 4000 | 2.4340 |
| 2.5493 | 1.44 | 4500 | 2.4354 |
| 2.5435 | 1.6 | 5000 | 2.4307 |
| 2.5352 | 1.76 | 5500 | 2.4224 |
| 2.5445 | 1.92 | 6000 | 2.4167 |
| 2.5191 | 2.08 | 6500 | 2.4175 |
| 2.5143 | 2.24 | 7000 | 2.4149 |
| 2.5059 | 2.4 | 7500 | 2.4117 |
| 2.4865 | 2.56 | 8000 | 2.4063 |
| 2.5113 | 2.72 | 8500 | 2.3976 |
| 2.5115 | 2.88 | 9000 | 2.3959 |
| 2.485 | 3.04 | 9500 | 2.3917 |
| 2.4652 | 3.2 | 10000 | 2.3908 |
| 2.4569 | 3.36 | 10500 | 2.3877 |
| 2.4706 | 3.52 | 11000 | 2.3836 |
| 2.4375 | 3.68 | 11500 | 2.3870 |
| 2.4556 | 3.84 | 12000 | 2.3819 |
| 2.4487 | 4.0 | 12500 | 2.3842 |
| 2.4233 | 4.16 | 13000 | 2.3731 |
| 2.4238 | 4.32 | 13500 | 2.3801 |
| 2.4051 | 4.48 | 14000 | 2.3809 |
| 2.432 | 4.64 | 14500 | 2.3641 |
| 2.428 | 4.8 | 15000 | 2.3686 |
| 2.4248 | 4.96 | 15500 | 2.3741 |
| 2.4109 | 5.12 | 16000 | 2.3673 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Michunie/ppo-LunarLander-v2 | Michunie | 2022-12-10T23:39:04Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-10T19:31:41Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.30 +/- 17.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="Michunie/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Vasi001/whisper-small | Vasi001 | 2022-12-10T23:32:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-10T21:57:53Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Swedish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Swedish
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/medium-mlm-tweet | muhtasham | 2022-12-10T23:13:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-10T22:56:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: medium-mlm-tweet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-mlm-tweet
This model is a fine-tuned version of [google/bert_uncased_L-8_H-512_A-8](https://huggingface.co/google/bert_uncased_L-8_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1681 | 11.11 | 500 | 3.2485 |
| 2.6193 | 22.22 | 1000 | 3.2971 |
| 2.286 | 33.33 | 1500 | 3.5000 |
| 1.9916 | 44.44 | 2000 | 3.3983 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/small-mlm-tweet | muhtasham | 2022-12-10T22:55:44Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-10T22:41:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: small-mlm-tweet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-tweet
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4028 | 11.11 | 500 | 3.4323 |
| 2.8952 | 22.22 | 1000 | 3.4180 |
| 2.6035 | 33.33 | 1500 | 3.6851 |
| 2.3349 | 44.44 | 2000 | 3.4708 |
| 2.1048 | 55.56 | 2500 | 3.8171 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sanchit-gandhi/whisper-small-en-1k-steps | sanchit-gandhi | 2022-12-10T22:41:09Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-10T18:20:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: openai/whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: en
split: test
args: en
metrics:
- name: Wer
type: wer
value: 14.805770651929443
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3747
- Wer: 14.8058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2803 | 1.0 | 1000 | 0.3747 | 14.8058 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221210+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
muhtasham/mini-mlm-tweet | muhtasham | 2022-12-10T22:41:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-10T22:31:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mini-mlm-tweet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-mlm-tweet
This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9227 | 11.11 | 500 | 3.8377 |
| 3.4825 | 22.22 | 1000 | 3.7411 |
| 3.2903 | 33.33 | 1500 | 3.8864 |
| 3.1026 | 44.44 | 2000 | 3.6987 |
| 2.9438 | 55.56 | 2500 | 3.9807 |
| 2.8075 | 66.67 | 3000 | 3.8835 |
| 2.6951 | 77.78 | 3500 | 4.1171 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
alanrice/wav2vec2-large-xls-r-1b-irish-colab | alanrice | 2022-12-10T22:39:04Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-10T10:21:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
language:
- ga
model-index:
- name: wav2vec2-large-xls-r-1b-irish-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: ga-IE
split: train+validation
args: ga-IE
metrics:
- name: Wer
type: wer
value: 46.911764705882353
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-irish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0795
- Wer: 46.91
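A minimal CTC inference sketch (the audio path is a placeholder, and this assumes the repo ships its processor/tokenizer files, as fine-tuned wav2vec2 checkpoints usually do):
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("alanrice/wav2vec2-large-xls-r-1b-irish-colab")
model = Wav2Vec2ForCTC.from_pretrained("alanrice/wav2vec2-large-xls-r-1b-irish-colab")

waveform, sr = torchaudio.load("irish_clip.wav")  # placeholder path to an Irish-language recording
speech = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0).numpy()  # 16 kHz mono
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```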
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.6902 | 12.12 | 400 | 1.1158 | 0.5959 |
| 0.2988 | 24.24 | 800 | 1.1375 | 0.5094 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.10.0+cu113
- Datasets 2.0.0
- Tokenizers 0.13.2
|
alanrice/wav2vec2-large-xls-r-300m-irish-colab | alanrice | 2022-12-10T22:38:40Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-08T22:35:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
language:
- ga
model-index:
- name: wav2vec2-large-xls-r-300m-irish-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: ga-IE
split: train+validation
args: ga-IE
metrics:
- name: Wer
type: wer
value: 52.44117647058824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-irish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.148
- Wer: 52.4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6516 | 12.12 | 400 | 1.2867 | 0.7653 |
| 0.4188 | 24.24 | 800 | 1.1262 | 0.5509 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.10.0+cu113
- Datasets 2.0.0
- Tokenizers 0.13.2
|
Leilab/hair_lenght | Leilab | 2022-12-10T22:31:40Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-10T22:31:29Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: hair_lenght
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8888888955116272
---
# hair_lenght
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
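A minimal inference sketch (the image path is a placeholder — any portrait photo, local file or URL, works):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Leilab/hair_lenght")
print(classifier("portrait.jpg"))
```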
## Example Images
#### long hair

#### short human hair
 |
osanseviero/q-Taxi-v3-nice | osanseviero | 2022-12-10T22:16:59Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-10T22:16:53Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-nice
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="osanseviero/q-Taxi-v3-nice", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fcakyon/timesformer-large-finetuned-ssv2 | fcakyon | 2022-12-10T22:16:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"timesformer",
"video-classification",
"vision",
"arxiv:2102.05095",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2022-12-10T21:37:16Z | ---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---
# TimeSformer (large-sized model, fine-tuned on Something Something v2)
TimeSformer model pre-trained on [Something Something v2](https://developer.qualcomm.com/software/ai-datasets/something-something). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Bertasius et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer).
Disclaimer: The team releasing TimeSformer did not write a model card for this model so this model card has been written by [fcakyon](https://github.com/fcakyon).
## Intended uses & limitations
You can use the raw model for video classification into one of the 174 possible Something Something v2 labels.
### How to use
Here is how to use this model to classify a video:
```python
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(64, 3, 448, 448))
processor = AutoImageProcessor.from_pretrained("fcakyon/timesformer-large-finetuned-ssv2")
model = TimesformerForVideoClassification.from_pretrained("fcakyon/timesformer-large-finetuned-ssv2")
inputs = processor(images=video, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html#).
### BibTeX entry and citation info
```bibtex
@inproceedings{bertasius2021space,
title={Is Space-Time Attention All You Need for Video Understanding?},
author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo},
booktitle={International Conference on Machine Learning},
pages={813--824},
year={2021},
organization={PMLR}
}
``` |
fcakyon/timesformer-base-finetuned-k600 | fcakyon | 2022-12-10T22:09:46Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"timesformer",
"video-classification",
"vision",
"arxiv:2102.05095",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2022-12-10T21:53:59Z | ---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---
# TimeSformer (base-sized model, fine-tuned on Kinetics-600)
TimeSformer model pre-trained on [Kinetics-600](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Bertasius et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer).
Disclaimer: The team releasing TimeSformer did not write a model card for this model so this model card has been written by [fcakyon](https://github.com/fcakyon).
## Intended uses & limitations
You can use the raw model for video classification into one of the 600 possible Kinetics-600 labels.
### How to use
Here is how to use this model to classify a video:
```python
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(8, 3, 224, 224))
processor = AutoImageProcessor.from_pretrained("fcakyon/timesformer-base-finetuned-k600")
model = TimesformerForVideoClassification.from_pretrained("fcakyon/timesformer-base-finetuned-k600")
inputs = processor(images=video, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html#).
### BibTeX entry and citation info
```bibtex
@inproceedings{bertasius2021space,
title={Is Space-Time Attention All You Need for Video Understanding?},
author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo},
booktitle={International Conference on Machine Learning},
pages={813--824},
year={2021},
organization={PMLR}
}
``` |
osanseviero/q-FrozenLake-v1-4x4-noSlippery-test4 | osanseviero | 2022-12-10T22:08:08Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-10T22:07:59Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery-test4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="osanseviero/q-FrozenLake-v1-4x4-noSlippery-test4", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
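Once the pickled dictionary is loaded, a greedy rollout only needs the Q-table. The sketch below assumes the dictionary follows the Deep RL course helper format (`model["qtable"]` is a NumPy array of shape `(n_states, n_actions)`) and a classic Gym API where `env.step` returns a 4-tuple:
```python
import numpy as np

# Greedy evaluation sketch; `model` and `env` come from the snippet above (classic Gym API assumed).
qtable = model["qtable"]
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # pick the highest-value action for this state
    state, reward, done, info = env.step(action)
    total_reward += reward
print("Episode return:", total_reward)
```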
|
osanseviero/q-FrozenLake-v1-4x4-noSlippery-test3 | osanseviero | 2022-12-10T22:02:25Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-10T21:58:46Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery-test3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="osanseviero/q-FrozenLake-v1-4x4-noSlippery-test3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
 |
osanseviero/q-FrozenLake-v1-4x4-noSlippery-test | osanseviero | 2022-12-10T21:58:02Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-10T21:46:27Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="osanseviero/q-FrozenLake-v1-4x4-noSlippery-test", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
 |
xpariz10/ast-finetuned-audioset-10-10-0.4593-finetuning-ESC-50 | xpariz10 | 2022-12-10T21:55:51Z | 38 | 1 | transformers | [
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-12-07T17:18:03Z | ---
license: bsd-3-clause
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuning-ESC-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuning-ESC-50
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the ESC-50 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3356
- Accuracy: 0.9464
## Model description
More information needed
## Intended uses & limitations
More information needed
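A minimal inference sketch, assuming the checkpoint works with the standard `audio-classification` pipeline (the audio path below is a placeholder for any local 16 kHz clip):
```python
from transformers import pipeline

# "dog_bark.wav" is a placeholder path to a local audio file.
classifier = pipeline(
    "audio-classification",
    model="xpariz10/ast-finetuned-audioset-10-10-0.4593-finetuning-ESC-50",
)
print(classifier("dog_bark.wav", top_k=3))
```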
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0621 | 1.0 | 28 | 0.4656 | 0.875 |
| 0.0694 | 2.0 | 56 | 0.3050 | 0.9107 |
| 0.0157 | 3.0 | 84 | 0.3356 | 0.9464 |
| 0.0038 | 4.0 | 112 | 0.3175 | 0.9286 |
| 0.0011 | 5.0 | 140 | 0.2579 | 0.9286 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Stxlla/fine-finetuned | Stxlla | 2022-12-10T21:37:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-10T16:13:44Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: fine-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-finetuned
This model is a fine-tuned version of [Stxlla/ko-en-following](https://huggingface.co/Stxlla/ko-en-following) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1211
- eval_bleu: 61.2672
- eval_gen_len: 11.3556
- eval_runtime: 2042.0344
- eval_samples_per_second: 16.208
- eval_steps_per_second: 1.013
- epoch: 2.0
- step: 33098
## Model description
More information needed
## Intended uses & limitations
More information needed
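A usage sketch, assuming the checkpoint keeps the standard M2M-100 tokenizer and translates Korean to English (both assumptions inferred from the base model name, not stated in this card):
```python
from transformers import AutoTokenizer, M2M100ForConditionalGeneration

# Assumes a Korean -> English direction and the standard M2M-100 tokenizer.
tokenizer = AutoTokenizer.from_pretrained("Stxlla/fine-finetuned")
model = M2M100ForConditionalGeneration.from_pretrained("Stxlla/fine-finetuned")

tokenizer.src_lang = "ko"
inputs = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```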
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
joelkoch/ppo-Huggy | joelkoch | 2022-12-10T21:31:18Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-10T21:31:11Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: joelkoch/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Leilab/gender_class | Leilab | 2022-12-10T21:18:02Z | 1,020 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-10T21:17:51Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: gender_class
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9555555582046509
---
# gender_class
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
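A quick way to try the classifier, assuming a local image file (the path below is a placeholder):
```python
from transformers import pipeline

# "portrait.jpg" is a placeholder path to any local image.
classifier = pipeline("image-classification", model="Leilab/gender_class")
print(classifier("portrait.jpg"))
```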
## Example Images
#### men

#### women
 |
TimothyKassis/ppo-LunarLander-v2 | TimothyKassis | 2022-12-10T20:09:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-10T20:08:44Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.85 +/- 16.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
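One possible way to fill in the TODO above, assuming the checkpoint was uploaded with the usual `ppo-LunarLander-v2.zip` filename (the filename is an assumption, not stated in this card):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename below is an assumption about how the checkpoint was saved.
checkpoint = load_from_hub(
    repo_id="TimothyKassis/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```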
|
michellejieli/inappropriate_text_classifier | michellejieli | 2022-12-10T20:08:21Z | 1,298 | 10 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"distilroberta",
"sentiment",
"NSFW",
"inappropriate",
"spam",
"twitter",
"reddit",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-10T20:00:03Z | ---
license: creativeml-openrail-m
language: "en"
tags:
- distilroberta
- sentiment
- NSFW
- inappropriate
- spam
- twitter
- reddit
widget:
- text: "I like you. You remind me of me when I was young and stupid."
- text: "I see you’ve set aside this special time to humiliate yourself in public."
- text: "Have a great weekend! See you next week!"
---
# Fine-tuned DistilBERT for NSFW Inappropriate Text Classification
# Model Description
DistilBERT is a transformer model that performs sentiment analysis. I fine-tuned the model on Reddit posts with the purpose of classifying not safe for work (NSFW) content, specifically text that is considered inappropriate and unprofessional. The model predicts 2 classes, which are NSFW or safe for work (SFW).
The model is a fine-tuned version of [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert).
It was fine-tuned on 19604 Reddit posts pulled from the [Comprehensive Abusiveness Detection Dataset](https://aclanthology.org/2021.conll-1.43/).
# How to Use
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="michellejieli/inappropriate_text_classifier")
classifier("I see you’ve set aside this special time to humiliate yourself in public.")
```
```python
Output:
[{'label': 'NSFW', 'score': 0.9684491753578186}]
```
# Contact
Please reach out to [[email protected]](mailto:[email protected]) if you have any questions or feedback.
# Reference
```
Hoyun Song, Soo Hyun Ryu, Huije Lee, and Jong Park. 2021. A Large-scale Comprehensive Abusiveness Detection Dataset with Multifaceted Labels from Reddit. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 552–561, Online. Association for Computational Linguistics.
```
---
|
RamonAnkersmit/ppo-LunarLander-v2 | RamonAnkersmit | 2022-12-10T20:08:18Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-09T17:59:48Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.85 +/- 20.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
michellejieli/NSFW_text_classifier | michellejieli | 2022-12-10T19:59:37Z | 149,407 | 95 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"distilroberta",
"sentiment",
"NSFW",
"inappropriate",
"spam",
"twitter",
"reddit",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-10T01:42:56Z | ---
language: "en"
tags:
- distilroberta
- sentiment
- NSFW
- inappropriate
- spam
- twitter
- reddit
widget:
- text: "I like you. You remind me of me when I was young and stupid."
- text: "I see you’ve set aside this special time to humiliate yourself in public."
- text: "Have a great weekend! See you next week!"
---
# Fine-tuned DistilRoBERTa-base for NSFW Classification
# Model Description
DistilBERT is a transformer model that performs sentiment analysis. I fine-tuned the model on Reddit posts with the purpose of classifying not safe for work (NSFW) content, specifically text that is considered inappropriate and unprofessional. The model predicts 2 classes, which are NSFW or safe for work (SFW).
The model is a fine-tuned version of [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert).
It was fine-tuned on 14317 Reddit posts pulled from the [Reddit API](https://praw.readthedocs.io/en/stable/).
# How to Use
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="michellejieli/NSFW_text_classifier")
classifier("I see you’ve set aside this special time to humiliate yourself in public.")
```
```python
Output:
[{'label': 'NSFW', 'score': 0.998853325843811}]
```
# Contact
Please reach out to [[email protected]](mailto:[email protected]) if you have any questions or feedback.
--- |
Wurzeldieb/painted_abstract | Wurzeldieb | 2022-12-10T19:54:40Z | 0 | 8 | null | [
"license:openrail",
"region:us"
] | null | 2022-12-10T17:10:58Z | ---
license: openrail
---
This is a Textual Inversion embedding that creates an abstract style with a lot of detail while keeping the content recognizable.
Works with the 768x768 versions of Stable Diffusion 2.0 and 2.1
To use it, put the painted_abstract.pt file in your embeddings folder and use painted_abstract in your prompt
I recommend a cfg below 10, and maybe even a bit lower for 2.1; it gets more blocky the higher the cfg
Usually I use 7 for 2.0 and 5 for 2.1
I also recommend using an anime upscaler like RealESRGAN_x4plus_anime_6B
For the examples I used different samplers and both 2.0 and 2.1, generated at 768x768 and upscaled x4 with RealESRGAN_x4plus_anime_6B, otherwise untouched























|
eduyio/ppo-LunarLander-v2 | eduyio | 2022-12-10T19:07:49Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-10T19:07:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.90 +/- 18.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
admarcosai/ppo-Huggy | admarcosai | 2022-12-10T19:03:05Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-10T19:02:57Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: dmarcos/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
uzn/ddpm-trucks | uzn | 2022-12-10T18:50:41Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:uzn/truck",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-12-10T13:15:02Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: uzn/truck
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-trucks
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `uzn/truck` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
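A minimal sketch of what that snippet could look like, assuming the repository follows the standard `DDPMPipeline` layout:
```python
from diffusers import DDPMPipeline

# Assumes the checkpoint was saved in the standard DDPMPipeline layout.
pipeline = DDPMPipeline.from_pretrained("uzn/ddpm-trucks")
image = pipeline().images[0]
image.save("truck_sample.png")
```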
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/uzn/ddpm-trucks/tensorboard?#scalars)
|
alaaawad/sd-class-butterflies-64 | alaaawad | 2022-12-10T18:42:06Z | 3 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-10T18:41:21Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('alaaawad/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
Lilya/distilbert-base-uncased-finetuned-ner-invoiceSenderName | Lilya | 2022-12-10T18:39:24Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-09T14:43:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner-invoiceSenderName
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner-invoiceSenderName
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0254
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9924
## Model description
More information needed
## Intended uses & limitations
More information needed
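A minimal sketch for running the checkpoint with the token-classification pipeline (the example sentence is a placeholder, not from the training data):
```python
from transformers import pipeline

# The example text is a placeholder invoice-style sentence.
ner = pipeline(
    "token-classification",
    model="Lilya/distilbert-base-uncased-finetuned-ner-invoiceSenderName",
    aggregation_strategy="simple",
)
print(ner("Invoice issued by Acme Corporation, 12 Main Street, Springfield."))
```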
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.0306 | 1.0 | 1956 | 0.0273 | 0.0 | 0.0 | 0.0 | 0.9901 |
| 0.0195 | 2.0 | 3912 | 0.0240 | 0.0 | 0.0 | 0.0 | 0.9914 |
| 0.0143 | 3.0 | 5868 | 0.0251 | 0.0 | 0.0 | 0.0 | 0.9921 |
| 0.0107 | 4.0 | 7824 | 0.0254 | 0.0 | 0.0 | 0.0 | 0.9924 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 2.3.2
- Tokenizers 0.10.3
|
alaaawad/sd-class-butterflies-32 | alaaawad | 2022-12-10T18:03:24Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-10T18:01:49Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('alaaawad/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
austinzheng/ppo-Huggy | austinzheng | 2022-12-10T18:01:49Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-10T18:01:26Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: austinzheng/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bitsanlp/roberta-retrained-250k | bitsanlp | 2022-12-10T17:18:00Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-10T15:35:24Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-retrained-250k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-retrained-250k
This model is a fine-tuned version of [bitsanlp/roberta-retrained_100k](https://huggingface.co/bitsanlp/roberta-retrained_100k) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
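As a masked-language-model checkpoint it can presumably be queried with the fill-mask pipeline; a minimal sketch (the prompt is a placeholder):
```python
from transformers import pipeline

# Uses RoBERTa's <mask> token; the prompt is a placeholder.
unmasker = pipeline("fill-mask", model="bitsanlp/roberta-retrained-250k")
print(unmasker("The weather today is <mask>."))
```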
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
yonas/stt_rw_conformer_ctc_large | yonas | 2022-12-10T17:16:27Z | 12 | 0 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"Kinyarwanda",
"audio",
"CTC",
"Conformer",
"Transformer",
"NeMo",
"pytorch",
"rw",
"dataset:mozilla-foundation/common_voice_11_0",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-12-02T13:08:08Z | ---
language:
- rw
license: cc-by-4.0
library_name: nemo
datasets:
- mozilla-foundation/common_voice_11_0
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- Kinyarwanda
- audio
- CTC
- Conformer
- Transformer
- NeMo
- pytorch
model-index:
- name: stt_rw_conformer_ctc_large
results: []
---
## Model Overview
<DESCRIBE IN ONE LINE THE MODEL AND ITS USE>
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("yonas/stt_rw_conformer_ctc_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="yonas/stt_rw_conformer_ctc_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000 Hz mono-channel audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
<ADD SOME INFORMATION ABOUT THE ARCHITECTURE>
## Training
<ADD INFORMATION ABOUT HOW THE MODEL WAS TRAINED - HOW MANY EPOCHS, AMOUNT OF COMPUTE ETC>
### Datasets
<LIST THE NAME AND SPLITS OF DATASETS USED TO TRAIN THIS MODEL (ALONG WITH LANGUAGE AND ANY ADDITIONAL INFORMATION)>
## Performance
<LIST THE SCORES OF THE MODEL -
OR
USE THE Hugging Face Evaluate LiBRARY TO UPLOAD METRICS>
## Limitations
<DECLARE ANY POTENTIAL LIMITATIONS OF THE MODEL>
Eg:
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## References
<ADD ANY REFERENCES HERE AS NEEDED>
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|