modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-25 06:27:54) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 495 classes) | tags (sequence, 1 to 4.05k values) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-25 06:24:22) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
kparker/ppo-Huggy | kparker | 2023-01-18T12:54:21Z | 13 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-01-18T12:54:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: kparker/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
nabdan/mnist | nabdan | 2023-01-18T12:39:37Z | 6 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-01-18T12:38:08Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('nabdan/mnist')
image = pipeline().images[0]
image
```
|
mqy/mt5-small-finetuned-18jan-4 | mqy | 2023-01-18T12:29:36Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-01-18T11:02:26Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-18jan-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-18jan-4
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6070
- Rouge1: 5.8518
- Rouge2: 0.3333
- Rougel: 5.8423
- Rougelsum: 5.7268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
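For readers reproducing this setup, the settings above map roughly onto the 🤗 `Seq2SeqTrainingArguments` shown below. This is a sketch only: `output_dir` and `predict_with_generate` are assumptions, not values reported in this card.
```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the reported hyperparameters (not the authors' exact script)
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned-18jan-4",  # assumed, matches the model name
    learning_rate=2e-4,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    predict_with_generate=True,  # assumed, needed to compute ROUGE during evaluation
)
```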
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 7.6303 | 1.0 | 60 | 3.0842 | 6.1768 | 1.2345 | 6.2047 | 6.1838 |
| 3.8899 | 2.0 | 120 | 2.7540 | 7.9407 | 1.0 | 7.8852 | 7.9087 |
| 3.4335 | 3.0 | 180 | 2.7391 | 8.5431 | 0.5667 | 8.5448 | 8.4406 |
| 3.2524 | 4.0 | 240 | 2.6775 | 8.7375 | 0.4167 | 8.6926 | 8.569 |
| 3.0853 | 5.0 | 300 | 2.6776 | 7.7823 | 0.1667 | 7.7548 | 7.6573 |
| 2.974 | 6.0 | 360 | 2.6641 | 8.375 | 0.1667 | 8.3333 | 8.2167 |
| 2.9018 | 7.0 | 420 | 2.6233 | 7.2137 | 0.3333 | 7.147 | 7.0595 |
| 2.859 | 8.0 | 480 | 2.6238 | 6.6125 | 0.4167 | 6.656 | 6.4595 |
| 2.8123 | 9.0 | 540 | 2.5961 | 6.4262 | 0.3333 | 6.3682 | 6.2131 |
| 2.7843 | 10.0 | 600 | 2.6070 | 5.8518 | 0.3333 | 5.8423 | 5.7268 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
justinpinkney/lhq-sg2-1024 | justinpinkney | 2023-01-18T12:28:25Z | 0 | 6 | null | [
"license:mit",
"region:us"
] | null | 2023-01-18T12:12:44Z | ---
license: mit
---
# StyleGAN2 LHQ 1024
A [StyleGAN2 config-f](https://github.com/NVlabs/stylegan2-ada-pytorch) model trained on the [LHQ dataset](https://github.com/universome/alis).
Trained for 2.06 million images, reaching FID = 2.56.
 |
Rakib/roberta-base-on-cuad | Rakib | 2023-01-18T12:18:53Z | 19,027 | 7 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"legal-contract-review",
"cuad",
"en",
"dataset:cuad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
language:
- en
license: mit
datasets:
- cuad
pipeline_tag: question-answering
tags:
- legal-contract-review
- roberta
- cuad
library_name: transformers
---
# Model Card for roberta-base-on-cuad
# Model Details
## Model Description
- **Developed by:** Mohammed Rakib
- **Shared by [Optional]:** More information needed
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** MIT
- **Related Models:**
- **Parent Model:** RoBERTa
- **Resources for more information:**
- GitHub Repo: [defactolaw](https://github.com/afra-tech/defactolaw)
- Associated Paper: [An Open Source Contractual Language Understanding Application Using Machine Learning](https://aclanthology.org/2022.lateraisse-1.6/)
# Uses
## Direct Use
This model can be used for the task of Question Answering on Legal Documents.
# Training Details
Read: [An Open Source Contractual Language Understanding Application Using Machine Learning](https://aclanthology.org/2022.lateraisse-1.6/)
for detailed information on training procedure, dataset preprocessing and evaluation.
## Training Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
Used V100/P100 from Google Colab Pro
### Software
Python, Transformers
# Citation
**BibTeX:**
```
@inproceedings{nawar-etal-2022-open,
title = "An Open Source Contractual Language Understanding Application Using Machine Learning",
author = "Nawar, Afra and
Rakib, Mohammed and
Hai, Salma Abdul and
Haq, Sanaulla",
booktitle = "Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lateraisse-1.6",
pages = "42--50",
abstract = "Legal field is characterized by its exclusivity and non-transparency. Despite the frequency and relevance of legal dealings, legal documents like contracts remains elusive to non-legal professionals for the copious usage of legal jargon. There has been little advancement in making legal contracts more comprehensible. This paper presents how Machine Learning and NLP can be applied to solve this problem, further considering the challenges of applying ML to the high length of contract documents and training in a low resource environment. The largest open-source contract dataset so far, the Contract Understanding Atticus Dataset (CUAD) is utilized. Various pre-processing experiments and hyperparameter tuning have been carried out and we successfully managed to eclipse SOTA results presented for models in the CUAD dataset trained on RoBERTa-base. Our model, A-type-RoBERTa-base achieved an AUPR score of 46.6{\%} compared to 42.6{\%} on the original RoBERT-base. This model is utilized in our end to end contract understanding application which is able to take a contract and highlight the clauses a user is looking to find along with it{'}s descriptions to aid due diligence before signing. Alongside digital, i.e. searchable, contracts the system is capable of processing scanned, i.e. non-searchable, contracts using tesseract OCR. This application is aimed to not only make contract review a comprehensible process to non-legal professionals, but also to help lawyers and attorneys more efficiently review contracts.",
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Mohammed Rakib in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("Rakib/roberta-base-on-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("Rakib/roberta-base-on-cuad")
```
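Once loaded, a hedged end-to-end sketch looks like the following; the question and contract excerpt are illustrative examples, not drawn from CUAD.
```python
from transformers import pipeline

# Build a question-answering pipeline from the checkpoint loaded above
qa = pipeline("question-answering", model="Rakib/roberta-base-on-cuad")
result = qa(
    question="What is the governing law of this agreement?",
    context=(
        "This Agreement shall be governed by and construed in accordance "
        "with the laws of the State of New York."
    ),
)
print(result["answer"], result["score"])
```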
</details> |
Kushrjain/results_6_to_11_with_embedding2 | Kushrjain | 2023-01-18T12:09:13Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-16T05:35:14Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-codemixed-uncased-sentiment-hatespeech-multilanguage
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_6_to_11_with_embedding2
This model is a fine-tuned version of [rohanrajpal/bert-base-codemixed-uncased-sentiment](https://huggingface.co/rohanrajpal/bert-base-codemixed-uncased-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3507
- Accuracy: 0.8759
- Precision: 0.8751
- Recall: 0.8759
- F1: 0.8755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3889 | 1.0 | 1460 | 0.3761 | 0.8335 | 0.8484 | 0.8335 | 0.8371 |
| 0.3273 | 2.0 | 2920 | 0.3196 | 0.8542 | 0.8602 | 0.8542 | 0.8561 |
| 0.2955 | 3.0 | 4380 | 0.3116 | 0.8645 | 0.8644 | 0.8645 | 0.8645 |
| 0.27 | 4.0 | 5840 | 0.3014 | 0.8704 | 0.8695 | 0.8704 | 0.8699 |
| 0.2601 | 5.0 | 7300 | 0.3285 | 0.8676 | 0.8714 | 0.8676 | 0.8689 |
| 0.2376 | 6.0 | 8760 | 0.3147 | 0.8726 | 0.8737 | 0.8726 | 0.8731 |
| 0.213 | 7.0 | 10220 | 0.3103 | 0.8699 | 0.8714 | 0.8699 | 0.8706 |
| 0.2013 | 8.0 | 11680 | 0.3424 | 0.8737 | 0.8733 | 0.8737 | 0.8735 |
| 0.192 | 9.0 | 13140 | 0.3398 | 0.8758 | 0.8746 | 0.8758 | 0.8750 |
| 0.1763 | 10.0 | 14600 | 0.3507 | 0.8759 | 0.8751 | 0.8759 | 0.8755 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
TomUdale/sec_example | TomUdale | 2023-01-18T11:56:19Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"finance",
"legal",
"en",
"dataset:tner/fin",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-01-18T10:35:15Z | ---
datasets:
- tner/fin
language:
- en
tags:
- finance
- legal
--- |
rishipatel92/a2c-PandaReachDense-v2 | rishipatel92 | 2023-01-18T11:55:54Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T11:53:36Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.89 +/- 0.29
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
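A hedged way to fill in this TODO is sketched below; the checkpoint file name follows the usual `huggingface_sb3` naming convention and is an assumption, not something stated in this card.
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (file name assumed) and load the agent
checkpoint = load_from_hub("rishipatel92/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```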
|
Dharkelf/a2c-PandaReachDense-v2_2 | Dharkelf | 2023-01-18T11:51:17Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T11:49:06Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -9.66 +/- 5.01
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sd-concepts-library/barbosa | sd-concepts-library | 2023-01-18T11:28:06Z | 0 | 2 | null | [
"license:mit",
"region:us"
] | null | 2023-01-18T11:26:06Z | ---
license: mit
---
### barbosa on Stable Diffusion
This is the `<barbosa>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
---
These are watercolor paintings by Veronika Ulychny.
Here is the new concept you will be able to use as a `style`:






|
jimypbr/bart-base-finetuned-xsum | jimypbr | 2023-01-18T11:16:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"optimum_graphcore",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-01-18T10:29:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: bart-base-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-xsum
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8584
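For quick experimentation, a hedged inference sketch is shown below. It loads the checkpoint with plain `transformers` (this model was trained with `optimum-graphcore` on IPUs, so loading it this way is an assumption) and summarizes a placeholder article.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jimypbr/bart-base-finetuned-xsum")
article = "Replace this placeholder with the news article you want to summarize ..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```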
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1176 | 1.0 | 3188 | 1.8584 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.13.0+cpu
- Datasets 2.8.0
- Tokenizers 0.12.1
|
Adapting/KeyBartAdapter | Adapting | 2023-01-18T11:12:55Z | 2 | 0 | null | [
"pytorch",
"Keyphrase Generation",
"license:mit",
"region:us"
] | null | 2022-10-19T09:35:10Z | ---
license: mit
tags:
- Keyphrase Generation
---
# Usage
```python
!pip install KeyBartAdapter
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from models import KeyBartAdapter
model = KeyBartAdapter.from_pretrained('Adapting/KeyBartAdapter', revision = '3aee5ecf1703b9955ab0cd1b23208cc54eb17fce',adapter_hid_dim =32)
tokenizer = AutoTokenizer.from_pretrained("bloomberg/KeyBART")
```
- adapter layer hd 512 init model: `e38c77df86e0e289e5846455e226f4e9af09ef8e`
- adapter layer hd 256 init model: `c6f3b357d953dcb5943b6333a0f9f941b832477`
- adapter layer hd 128 init model: `f88116fa1c995f07ccd5ad88862e0aa4f162b1ea`
- adapter layer hd 64 init model: `f7e8c6323b8d5822667ddc066ffe19ac7b810f4a`
- adapter layer hd 32 init model: `24ec15daef1670fb9849a56517a6886b69b652f6`
**1. inference**
```python
from transformers import Text2TextGenerationPipeline
pipe = Text2TextGenerationPipeline(model=model,tokenizer=tokenizer)
abstract = '''Non-referential face image quality assessment methods have gained popularity as a pre-filtering step on face recognition systems. In most of them, the quality score is usually designed with face matching in mind. However, a small amount of work has been done on measuring their impact and usefulness on Presentation Attack Detection (PAD). In this paper, we study the effect of quality assessment methods on filtering bona fide and attack samples, their impact on PAD systems, and how the performance of such systems is improved when training on a filtered (by quality) dataset. On a Vision Transformer PAD algorithm, a reduction of 20% of the training dataset by removing lower quality samples allowed us to improve the BPCER by 3% in a cross-dataset test.'''
pipe(abstract)
``` |
MilaNLProc/bert-base-uncased-ear-misogyny | MilaNLProc | 2023-01-18T11:02:51Z | 1,901 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"misogyny detection",
"abusive language",
"hate speech",
"offensive language",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-16T23:31:46Z | ---
language:
- en
license: gpl-3.0
tags:
- misogyny detection
- abusive language
- hate speech
- offensive language
widget:
- text: I believe women need to be protected more.
example_title: Misogyny Detection Example 1
pipeline_tag: text-classification
---
# Entropy-based Attention Regularization 👂
This is an English BERT fine-tuned with [Entropy-based Attention Regularization](https://aclanthology.org/2022.findings-acl.88/) to reduce lexical overfitting to specific words on the task of Misogyny Identification.
Use this model if you want a debiased alternative to a BERT classifier.
Please refer to the paper to know all the training details.
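A minimal inference sketch, assuming the standard `transformers` text-classification pipeline (the exact label names depend on the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="MilaNLProc/bert-base-uncased-ear-misogyny")
# Example text taken from the widget above
print(classifier("I believe women need to be protected more."))
```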
## Dataset
The model was fine-tuned on the [Automatic Misogyny Identification dataset](https://ceur-ws.org/Vol-2263/paper009.pdf).
## Model
This model is the fine-tuned version of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model.
We trained a total of three versions for Italian and English.
| Model | Download |
| ------ | -------------------------|
| `bert-base-uncased-ear-misogyny` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-misogyny) |
| `bert-base-uncased-ear-mlma` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-mlma) |
| `bert-base-uncased-ear-misogyny-italian` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-misogyny-italian) |
# Authors
- [Giuseppe Attanasio](https://gattanasio.cc/)
- [Debora Nozza](http://dnozza.github.io/)
- [Dirk Hovy](https://federicobianchi.io/)
- [Elena Baralis](https://dbdmg.polito.it/wordpress/people/elena-baralis/)
# Citation
Please use the following BibTeX entry if you use this model in your project:
```
@inproceedings{attanasio-etal-2022-entropy,
title = "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists",
author = "Attanasio, Giuseppe and
Nozza, Debora and
Hovy, Dirk and
Baralis, Elena",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.88",
doi = "10.18653/v1/2022.findings-acl.88",
pages = "1105--1119",
abstract = "Natural Language Processing (NLP) models risk overfitting to specific terms in the training data, thereby reducing their performance, fairness, and generalizability. E.g., neural hate speech detection models are strongly influenced by identity terms like gay, or women, resulting in false positives, severe unintended bias, and lower performance.Most mitigation techniques use lists of identity terms or samples from the target domain during training. However, this approach requires a-priori knowledge and introduces further bias if important terms are neglected.Instead, we propose a knowledge-free Entropy-based Attention Regularization (EAR) to discourage overfitting to training-specific terms. An additional objective function penalizes tokens with low self-attention entropy.We fine-tune BERT via EAR: the resulting model matches or exceeds state-of-the-art performance for hate speech classification and bias metrics on three benchmark corpora in English and Italian.EAR also reveals overfitting terms, i.e., terms most likely to induce bias, to help identify their effect on the model, task, and predictions.",
}
```
# Limitations
Entropy-Attention Regularization mitigates lexical overfitting but does not completely remove it. We expect the model still to show biases, e.g., peculiar keywords that induce a specific prediction regardless of the context.
Please refer to our paper for a quantitative evaluation of this mitigation.
## License
[GNU GPLv3](https://choosealicense.com/licenses/gpl-3.0/) |
MilaNLProc/bert-base-uncased-ear-misogyny-italian | MilaNLProc | 2023-01-18T11:02:33Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"misogyny detection",
"abusive language",
"hate speech",
"offensive language",
"it",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-16T23:39:00Z | ---
language: it
license: gpl-3.0
tags:
- misogyny detection
- abusive language
- hate speech
- offensive language
widget:
- text: Apprezzo il lavoro delle donne nella nostra comunità.
example_title: Misogyny Detection Example 1
pipeline_tag: text-classification
---
# Entropy-based Attention Regularization 👂
This is an Italian BERT fine-tuned with [Entropy-based Attention Regularization](https://aclanthology.org/2022.findings-acl.88/) to reduce lexical overfitting to specific words on the task of Misogyny Identification.
Use this model if you want a debiased alternative to a BERT classifier.
Please refer to the paper to know all the training details.
## Dataset
The model was fine-tuned on the [Italian Automatic Misogyny Identification dataset](https://ceur-ws.org/Vol-2765/paper161.pdf).
## Model
This model is the fine-tuned version of the Italian [dbmdz/bert-base-italian-uncased](https://huggingface.co/dbmdz/bert-base-italian-uncased) model.
We trained a total of three versions for Italian and English.
| Model | Download |
| ------ | -------------------------|
| `bert-base-uncased-ear-misogyny` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-misogyny) |
| `bert-base-uncased-ear-mlma` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-mlma) |
| `bert-base-uncased-ear-misogyny-italian` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-misogyny-italian) |
# Authors
- [Giuseppe Attanasio](https://gattanasio.cc/)
- [Debora Nozza](http://dnozza.github.io/)
- [Dirk Hovy](https://federicobianchi.io/)
- [Elena Baralis](https://dbdmg.polito.it/wordpress/people/elena-baralis/)
# Citation
Please use the following BibTeX entry if you use this model in your project:
```
@inproceedings{attanasio-etal-2022-entropy,
title = "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists",
author = "Attanasio, Giuseppe and
Nozza, Debora and
Hovy, Dirk and
Baralis, Elena",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.88",
doi = "10.18653/v1/2022.findings-acl.88",
pages = "1105--1119",
abstract = "Natural Language Processing (NLP) models risk overfitting to specific terms in the training data, thereby reducing their performance, fairness, and generalizability. E.g., neural hate speech detection models are strongly influenced by identity terms like gay, or women, resulting in false positives, severe unintended bias, and lower performance.Most mitigation techniques use lists of identity terms or samples from the target domain during training. However, this approach requires a-priori knowledge and introduces further bias if important terms are neglected.Instead, we propose a knowledge-free Entropy-based Attention Regularization (EAR) to discourage overfitting to training-specific terms. An additional objective function penalizes tokens with low self-attention entropy.We fine-tune BERT via EAR: the resulting model matches or exceeds state-of-the-art performance for hate speech classification and bias metrics on three benchmark corpora in English and Italian.EAR also reveals overfitting terms, i.e., terms most likely to induce bias, to help identify their effect on the model, task, and predictions.",
}
```
# Limitations
Entropy-Attention Regularization mitigates lexical overfitting but does not completely remove it. We expect the model still to show biases, e.g., peculiar keywords that induce a specific prediction regardless of the context.
Please refer to our paper for a quantitative evaluation of this mitigation.
## License
[GNU GPLv3](https://choosealicense.com/licenses/gpl-3.0/) |
MilaNLProc/bert-base-uncased-ear-mlma | MilaNLProc | 2023-01-18T11:02:12Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"misogyny detection",
"abusive language",
"hate speech",
"offensive language",
"en",
"dataset:nedjmaou/MLMA_hate_speech",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-17T11:25:19Z | ---
language:
- en
license: gpl-3.0
tags:
- misogyny detection
- abusive language
- hate speech
- offensive language
widget:
- text: I believe religious minorities need to be protected more.
example_title: Hate Speech Detection Example 1
pipeline_tag: text-classification
datasets:
- nedjmaou/MLMA_hate_speech
---
# Entropy-based Attention Regularization 👂
This is an English BERT fine-tuned with [Entropy-based Attention Regularization](https://aclanthology.org/2022.findings-acl.88/) to reduce lexical overfitting to specific words on the task of Misogyny Identification.
Use this model if you want a debiased alternative to a BERT classifier.
Please refer to the paper to know all the training details.
## Dataset
The model was fine-tuned on the English part of the [MLMA dataset](https://aclanthology.org/D19-1474/).
## Model
This model is the fine-tuned version of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model.
We trained a total of three versions for Italian and English.
| Model | Download |
| ------ | -------------------------|
| `bert-base-uncased-ear-misogyny` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-misogyny) |
| `bert-base-uncased-ear-mlma` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-mlma) |
| `bert-base-uncased-ear-misogyny-italian` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-misogyny-italian) |
# Authors
- [Giuseppe Attanasio](https://gattanasio.cc/)
- [Debora Nozza](http://dnozza.github.io/)
- [Dirk Hovy](https://federicobianchi.io/)
- [Elena Baralis](https://dbdmg.polito.it/wordpress/people/elena-baralis/)
# Citation
Please use the following BibTeX entry if you use this model in your project:
```
@inproceedings{attanasio-etal-2022-entropy,
title = "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists",
author = "Attanasio, Giuseppe and
Nozza, Debora and
Hovy, Dirk and
Baralis, Elena",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.88",
doi = "10.18653/v1/2022.findings-acl.88",
pages = "1105--1119",
abstract = "Natural Language Processing (NLP) models risk overfitting to specific terms in the training data, thereby reducing their performance, fairness, and generalizability. E.g., neural hate speech detection models are strongly influenced by identity terms like gay, or women, resulting in false positives, severe unintended bias, and lower performance.Most mitigation techniques use lists of identity terms or samples from the target domain during training. However, this approach requires a-priori knowledge and introduces further bias if important terms are neglected.Instead, we propose a knowledge-free Entropy-based Attention Regularization (EAR) to discourage overfitting to training-specific terms. An additional objective function penalizes tokens with low self-attention entropy.We fine-tune BERT via EAR: the resulting model matches or exceeds state-of-the-art performance for hate speech classification and bias metrics on three benchmark corpora in English and Italian.EAR also reveals overfitting terms, i.e., terms most likely to induce bias, to help identify their effect on the model, task, and predictions.",
}
```
# Limitations
Entropy-Attention Regularization mitigates lexical overfitting but does not completely remove it. We expect the model still to show biases, e.g., peculiar keywords that induce a specific prediction regardless of the context.
Please refer to our paper for a quantitative evaluation of this mitigation.
# License
[GNU GPLv3](https://choosealicense.com/licenses/gpl-3.0/) |
NoNameFound/Pyramids-ppo | NoNameFound | 2023-01-18T10:59:41Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-01-18T10:59:33Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: Kaushik3497/Pyramids-ppo
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
mqy/mt5-small-finetuned-18jan-3 | mqy | 2023-01-18T10:42:58Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-01-18T06:43:00Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-18jan-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-18jan-3
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6115
- Rouge1: 7.259
- Rouge2: 0.3667
- Rougel: 7.1595
- Rougelsum: 7.156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 7.1947 | 1.0 | 60 | 3.1045 | 5.91 | 0.8583 | 5.8687 | 5.8123 |
| 3.8567 | 2.0 | 120 | 2.7744 | 8.0065 | 0.4524 | 8.0204 | 7.85 |
| 3.4346 | 3.0 | 180 | 2.7319 | 7.5954 | 0.4524 | 7.5204 | 7.4833 |
| 3.219 | 4.0 | 240 | 2.6736 | 8.5329 | 0.3333 | 8.487 | 8.312 |
| 3.0836 | 5.0 | 300 | 2.6583 | 8.3405 | 0.5667 | 8.2003 | 8.0543 |
| 2.9713 | 6.0 | 360 | 2.6516 | 8.8421 | 0.1667 | 8.7597 | 8.6754 |
| 2.9757 | 7.0 | 420 | 2.6369 | 8.04 | 0.3667 | 8.0018 | 7.8489 |
| 2.8321 | 8.0 | 480 | 2.6215 | 6.8739 | 0.3667 | 6.859 | 6.7917 |
| 2.794 | 9.0 | 540 | 2.6090 | 7.0738 | 0.4167 | 7.0232 | 6.9619 |
| 2.7695 | 10.0 | 600 | 2.6115 | 7.259 | 0.3667 | 7.1595 | 7.156 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
zhenyabeznisko/sd-class-butterflies-32 | zhenyabeznisko | 2023-01-18T10:26:54Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-01-18T10:26:37Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('zhenyabeznisko/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
eolang/DRL-SpaceInvadersNoFrameskip-v4 | eolang | 2023-01-18T10:26:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T10:26:14Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 513.00 +/- 148.75
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eolang -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eolang -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga eolang
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
happycoding/a2c-AntBulletEnv-v0 | happycoding | 2023-01-18T10:24:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T10:23:42Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1667.55 +/- 75.52
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
babakc/Reinforce-Pixelcopter-PLE-v0 | babakc | 2023-01-18T10:09:43Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T06:14:28Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 21.50 +/- 12.61
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Rakib/whisper-tiny-bn | Rakib | 2023-01-18T10:05:55Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"bn",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-01-17T08:34:18Z | ---
language:
- bn
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny Bengali
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 bn
type: mozilla-foundation/common_voice_11_0
config: bn
split: test
args: bn
metrics:
- name: Wer
type: wer
value: 32.89771261927907
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Bengali
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 bn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2314
- Wer: 32.8977
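A short transcription sketch, assuming the `transformers` automatic-speech-recognition pipeline; the audio file name below is a placeholder for a local Bengali speech recording.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Rakib/whisper-tiny-bn")
print(asr("bengali_sample.wav")["text"])  # placeholder path to a Bengali audio file
```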
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3362 | 0.96 | 1000 | 0.3536 | 45.0860 |
| 0.2395 | 1.91 | 2000 | 0.2745 | 37.1714 |
| 0.205 | 2.87 | 3000 | 0.2485 | 34.7353 |
| 0.1795 | 3.83 | 4000 | 0.2352 | 33.2469 |
| 0.1578 | 4.78 | 5000 | 0.2314 | 32.8977 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
Tomor0720/deberta-base-finetuned-sst2 | Tomor0720 | 2023-01-18T10:00:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-18T09:22:08Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: train
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9495412844036697
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-sst2
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2411
- Accuracy: 0.9495
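A hedged inference sketch for this SST-2 sentiment classifier (the returned label names depend on the exported config):
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="Tomor0720/deberta-base-finetuned-sst2")
print(sentiment("A charming and often affecting journey."))
```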
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1946 | 1.0 | 4210 | 0.2586 | 0.9278 |
| 0.1434 | 2.0 | 8420 | 0.2296 | 0.9472 |
| 0.1025 | 3.0 | 12630 | 0.2411 | 0.9495 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
GDJ1978/reddit-top-posts-jan23 | GDJ1978 | 2023-01-18T09:59:56Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-14T10:31:17Z | midjourneygirl, genderrevealnuke, stelfiepics, mods, animegirl, analog, comiccowboys, oldman, spacegirl, protogirls, propaganda |
smko77/LunarLander-v2 | smko77 | 2023-01-18T09:14:02Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T09:13:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.68 +/- 22.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
keshan/a2c-PandaReachDense-v2 | keshan | 2023-01-18T09:10:55Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T04:51:45Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.37 +/- 0.36
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
mbdmbs/Marsey-Diffusion-v1 | mbdmbs | 2023-01-18T09:04:34Z | 0 | 0 | null | [
"art",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-01-18T08:26:12Z | ---
license: creativeml-openrail-m
language:
- en
tags:
- art
pipeline_tag: text-to-image
---
# Marsey Diffusion v1
Marsey Diffusion is a [Dreambooth](https://dreambooth.github.io/) model trained on Marsey emotes from rDrama.net. It is based on [Stable Diffusion v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
## Usage
To generate novel Marseys, use the keyword `rdmarsey` in your prompt.
**Example prompt:** `rdmarsey giving a thumbs up`
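A hedged loading sketch with diffusers is shown below; it assumes the repository hosts standard Stable Diffusion weights in diffusers format and that a CUDA GPU is available.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth fine-tune and generate with the rdmarsey keyword
pipe = StableDiffusionPipeline.from_pretrained("mbdmbs/Marsey-Diffusion-v1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("rdmarsey giving a thumbs up").images[0]
image.save("marsey.png")
```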
## Samples
**Sample of input images:**

**Sample of outputs (cherrypicked):**
 |
kjmann/ppo-Huggy | kjmann | 2023-01-18T09:02:01Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-01-18T09:01:53Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: kjmann/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
96harsh56/upload_test | 96harsh56 | 2023-01-18T08:31:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"en",
"dataset:squad_v2",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-01-18T04:54:50Z | ---
license: apache-2.0
datasets:
- squad_v2
language:
- en
metrics:
- squad_v2
pipeline_tag: question-answering
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
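Until the card is completed, a hedged sketch based only on this repo's `question-answering` pipeline tag and SQuAD v2 training data (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="96harsh56/upload_test")
print(qa(question="Where does Sarah live?", context="My name is Sarah and I live in London."))
```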
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
Srivathsava/ddpm-celb-faces | Srivathsava | 2023-01-18T08:19:27Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2023-01-12T10:29:15Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: /content/drive/MyDrive/ColabNotebooks/img_align_celeba
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-celb-faces
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `/content/drive/MyDrive/ColabNotebooks/img_align_celeba` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
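Until the authors add their snippet, a plausible sketch based on the `DDPMPipeline` tag attached to this repo (the loading details are assumptions):
```python
from diffusers import DDPMPipeline

# Assumes the repo hosts a standard DDPMPipeline checkpoint
pipeline = DDPMPipeline.from_pretrained("Srivathsava/ddpm-celb-faces")
image = pipeline().images[0]
image.save("sample.png")
```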
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Srivathsava/ddpm-celb-faces/tensorboard?#scalars)
|
caffsean/gpt2-the-economist | caffsean | 2023-01-18T08:14:27Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-01-18T07:35:46Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-the-economist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-the-economist
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5285
## Model description
More information needed
## Intended uses & limitations
More information needed
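In the meantime, the checkpoint can be used for text generation with the standard `transformers` pipeline. The snippet below is only a minimal sketch, assuming the tokenizer was pushed with the model; the prompt is purely illustrative.
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub (tokenizer assumed to be in the repo)
generator = pipeline("text-generation", model="caffsean/gpt2-the-economist")

# Generate a short continuation in the style the model was fine-tuned on
output = generator("The global economy", max_length=50, num_return_sequences=1)
print(output[0]["generated_text"])
```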
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 9820
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8737 | 1.0 | 1228 | 3.7960 |
| 3.6767 | 2.0 | 2456 | 3.6544 |
| 3.5561 | 3.0 | 3684 | 3.5948 |
| 3.431 | 4.0 | 4912 | 3.5495 |
| 3.3127 | 5.0 | 6140 | 3.5285 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
imflash217/SpaceInvaders_NoFrameskip_v4 | imflash217 | 2023-01-18T07:25:05Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T07:24:18Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 807.00 +/- 311.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga imflash217 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga imflash217 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga imflash217
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
msgerasyov/a2c-AntBulletEnv-v0 | msgerasyov | 2023-01-18T07:13:35Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T07:12:25Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1189.62 +/- 486.75
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
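A minimal sketch of what that code could look like, assuming the checkpoint in this repo is stored under the default RL-Zoo-style filename (an assumption; check the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (filename is assumed; adjust to the actual file in the repo)
checkpoint = load_from_hub(
    repo_id="msgerasyov/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)

# Load the trained A2C agent
model = A2C.load(checkpoint)
```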
|
gzerveas/CODER-CoCondenser | gzerveas | 2023-01-18T07:09:39Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"Information Retrieval",
"en",
"dataset:ms_marco",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-01-18T06:40:43Z | ---
datasets:
- ms_marco
language:
- en
metrics:
- MRR
- nDCG
tags:
- Information Retrieval
--- |
farukbuldur/ppo-LunarLander-v2 | farukbuldur | 2023-01-18T07:07:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T07:06:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.42 +/- 21.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
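A minimal sketch of what that code could look like, assuming the checkpoint filename below (an assumption; check the repo files):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(
    repo_id="farukbuldur/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the trained agent on a fresh environment
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```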
|
rootacess/q-FrozenLake-v1-4x4-noSlippery | rootacess | 2023-01-18T06:47:31Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T06:47:27Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="rootacess/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NoCrypt/SomethingV1 | NoCrypt | 2023-01-18T06:25:09Z | 0 | 4 | null | [
"region:us"
] | null | 2023-01-17T06:15:09Z | An anime text-to-image model focused on very vibrant and saturated images.


 |
aplnestrella/pegasus-samsum-18-2 | aplnestrella | 2023-01-18T05:49:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-01-18T04:42:38Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum-18-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum-18-2
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
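In the meantime, a minimal usage sketch for dialogue summarization with the `transformers` pipeline (the example dialogue is illustrative only):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="aplnestrella/pegasus-samsum-18-2")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
# Summarize the conversation into a short, third-person sentence
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```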
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 18
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 36
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
shi-labs/dinat-large-in22k-in1k-224 | shi-labs | 2023-01-18T05:16:44Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"dinat",
"image-classification",
"vision",
"dataset:imagenet-21k",
"dataset:imagenet-1k",
"arxiv:2209.15001",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-11-18T22:05:43Z | ---
license: mit
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# DiNAT (large variant)
DiNAT-Large with a 7x7 kernel pre-trained on ImageNet-21K, and fine-tuned on ImageNet-1K at 224x224.
It was introduced in the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
## Model description
DiNAT is a hierarchical vision transformer based on Neighborhood Attention (NA) and its dilated variant (DiNA).
Neighborhood Attention is a restricted self-attention pattern in which each token's receptive field is limited to its nearest neighboring pixels.
NA and DiNA are therefore sliding-window attention patterns, and as a result are highly flexible and maintain translational equivariance.
They come with PyTorch implementations through the [NATTEN](https://github.com/SHI-Labs/NATTEN/) package.

[Source](https://paperswithcode.com/paper/dilated-neighborhood-attention-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=dinat) to look for
fine-tuned versions on a task that interests you.
### Example
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, DinatForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoImageProcessor.from_pretrained("shi-labs/dinat-large-in22k-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-large-in22k-in1k-224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more examples, please refer to the [documentation](https://huggingface.co/transformers/model_doc/dinat.html#).
### Requirements
Other than transformers, this model requires the [NATTEN](https://shi-labs.com/natten) package.
If you're on Linux, you can refer to [shi-labs.com/natten](https://shi-labs.com/natten) for instructions on installing with pre-compiled binaries (just select your torch build to get the correct wheel URL).
You can alternatively use `pip install natten` to compile on your device, which may take up to a few minutes.
Mac users only have the latter option (no pre-compiled binaries).
Refer to [NATTEN's GitHub](https://github.com/SHI-Labs/NATTEN/) for more information.
### BibTeX entry and citation info
```bibtex
@article{hassani2022dilated,
title = {Dilated Neighborhood Attention Transformer},
author = {Ali Hassani and Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2209.15001},
eprint = {2209.15001},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
``` |
imflash217/q-FrozenLake-v1-8x8-noSlippery | imflash217 | 2023-01-18T05:05:30Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T04:43:51Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="imflash217/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
OnYourLeft/hate_speech_detection_model | OnYourLeft | 2023-01-18T05:01:20Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-18T04:59:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: hate_speech_detection_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_speech_detection_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0923
- Accuracy: 0.97
- F1: 0.9698
## Model description
More information needed
## Intended uses & limitations
More information needed
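As a minimal usage sketch, the fine-tuned DistilBERT classifier can be loaded with the `text-classification` pipeline (the label names depend on the model's `id2label` mapping, which is not documented here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="OnYourLeft/hate_speech_detection_model")

# Returns a label (e.g. LABEL_0 / LABEL_1 unless id2label was customised) and a confidence score
print(classifier("You are a wonderful person."))
```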
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Msaltz/8r | Msaltz | 2023-01-18T05:00:02Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-18T05:00:02Z | ---
license: creativeml-openrail-m
---
|
kejian/blurry-conditional | kejian | 2023-01-18T04:59:08Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-01-15T18:18:46Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: blurry-conditional
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blurry-conditional
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
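Given the conditional-training setup shown in the full config below, generation is typically steered with a control prefix. A minimal sketch, assuming the tokenizer (with the `<|aligned|>` / `<|misaligned|>` special tokens) was pushed with the checkpoint:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kejian/blurry-conditional")
model = AutoModelForCausalLM.from_pretrained("kejian/blurry-conditional")

# '<|aligned|>' is the control prefix used during conditional training
prompt = "<|aligned|>def hello_world():"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, do_sample=True, top_p=0.9, temperature=0.7)
print(tokenizer.decode(outputs[0]))
```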
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.1,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'batch_size': 128,
'every_n_steps': 384,
'force_call_on': [12588],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'bad_words_ids': [[32769]],
'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False},
{'display_as_html': True,
'generate_kwargs': {'bad_words_ids': [[32769]],
'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 384,
'force_call_on': [12588],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>',
'should_insert_prefix': True},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': 'cf05a2b0558c03b08c78f07662c22989785b9520'},
'num_additional_tokens': 2,
'path_or_name': 'kejian/mighty-mle'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'kejian/mighty-mle',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'blurry-conditional',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 12588,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/1gpnt88g |
charanhu/finetuned-bert-mrpc | charanhu | 2023-01-18T04:55:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-18T04:42:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8602941176470589
- name: F1
type: f1
value: 0.9035532994923857
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4148
- Accuracy: 0.8603
- F1: 0.9036
## Model description
More information needed
## Intended uses & limitations
More information needed
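A minimal sketch for scoring whether two sentences are paraphrases; the label order (0 = not equivalent, 1 = equivalent) follows the usual GLUE MRPC convention and is an assumption here:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("charanhu/finetuned-bert-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("charanhu/finetuned-bert-mrpc")

# Encode the sentence pair together, as in GLUE MRPC
inputs = tokenizer(
    "The company posted strong quarterly results.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities for [not paraphrase, paraphrase] under the assumed label order
```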
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5315 | 1.0 | 230 | 0.3698 | 0.8382 | 0.8862 |
| 0.3 | 2.0 | 460 | 0.3677 | 0.8431 | 0.8919 |
| 0.1575 | 3.0 | 690 | 0.4148 | 0.8603 | 0.9036 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
OnYourLeft/sentiment_analysis_model | OnYourLeft | 2023-01-18T04:42:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-18T04:34:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sentiment_analysis_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_analysis_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6018
- Accuracy: 0.8513
- F1: 0.8515
## Model description
More information needed
## Intended uses & limitations
More information needed
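A minimal usage sketch with the `text-classification` pipeline (the exact label names depend on the unspecified `id2label` mapping):
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="OnYourLeft/sentiment_analysis_model")

reviews = ["The movie was fantastic!", "The plot dragged and the ending fell flat."]
# The pipeline accepts a list and returns one prediction per input
for review, prediction in zip(reviews, sentiment(reviews)):
    print(review, "->", prediction)
```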
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
keshan/a2c-AntBulletEnv-v0 | keshan | 2023-01-18T04:08:00Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T04:06:52Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1659.20 +/- 164.55
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
babakc/Reinforce-Cart | babakc | 2023-01-18T04:05:00Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T04:04:52Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cart
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SALT-NLP/pfadapter-roberta-base-stsb-combined-value | SALT-NLP | 2023-01-18T03:49:00Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"en",
"dataset:glue",
"region:us"
] | null | 2022-09-16T04:35:38Z | ---
tags:
- roberta
- adapter-transformers
datasets:
- glue
language:
- en
---
# Adapter `SALT-NLP/pfadapter-roberta-base-stsb-combined-value` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("SALT-NLP/pfadapter-roberta-base-stsb-combined-value", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
peter1133/ppo-Pyramids | peter1133 | 2023-01-18T03:24:49Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-01-18T03:22:42Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: peter1133/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
soren127/GeFeMi_v1-1 | soren127 | 2023-01-18T03:14:43Z | 4 | 1 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-01-18T03:00:30Z | ---
license: creativeml-openrail-m
---
|
Lalalan/zeng_fanzhi | Lalalan | 2023-01-18T03:04:37Z | 0 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-01-17T08:01:01Z | This is a fine-tuned model of artist Zeng Fanzhi's style.
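A minimal sketch for generating images with the 🤗 Diffusers `StableDiffusionPipeline` (the prompt wording is illustrative; no specific trigger phrase is documented):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Lalalan/zeng_fanzhi", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Prompt is only an example; adjust to your subject
image = pipe("a portrait in the style of Zeng Fanzhi").images[0]
image.save("zeng_fanzhi_portrait.png")
```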
Here are some sample images:
<img src="https://huggingface.co/Lalalan/zeng_fanzhi/resolve/main/c51e7db1d5271ad2b17dff4126228ba.png">
<img src="https://huggingface.co/Lalalan/zeng_fanzhi/resolve/main/395f2934c358011eb68fa38c406b8b8.png">
<img src="https://huggingface.co/Lalalan/zeng_fanzhi/resolve/main/dc0514e280aa3d3133c318228daedfd.png"> |
jmparejaz/TFqa-finetuned-distilbert-base-cased | jmparejaz | 2023-01-18T02:54:01Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2023-01-18T02:53:34Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | AdamWeightDecay |
| learning_rate.class_name | WarmUp |
| learning_rate.config.initial_learning_rate | 0.0002 |
| learning_rate.config.decay_schedule_fn.class_name | PolynomialDecay |
| learning_rate.config.decay_schedule_fn.config.initial_learning_rate | 0.0002 |
| learning_rate.config.decay_schedule_fn.config.decay_steps | 22180 |
| learning_rate.config.decay_schedule_fn.config.end_learning_rate | 0.0 |
| learning_rate.config.decay_schedule_fn.config.power | 1.0 |
| learning_rate.config.decay_schedule_fn.config.cycle | False |
| learning_rate.config.decay_schedule_fn.config.name | None |
| learning_rate.config.decay_schedule_fn.__passive_serialization__ | True |
| learning_rate.config.warmup_steps | 2 |
| learning_rate.config.power | 1.0 |
| learning_rate.config.name | None |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-08 |
| amsgrad | False |
| weight_decay_rate | 0.01 |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
neilthematic/setfit-ethos-multilabel-example | neilthematic | 2023-01-18T02:47:00Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-01-18T02:44:53Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 12 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 12,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jmparejaz/qa_bert_finetuned-squad | jmparejaz | 2023-01-18T02:34:13Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-01-14T21:50:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: qa_bert_finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa_bert_finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.157358
## Model description
More information needed
## Intended uses & limitations
More information needed
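A minimal sketch with the `question-answering` pipeline (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jmparejaz/qa_bert_finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
# Extracted answer span and its confidence score
print(result["answer"], result["score"])
```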
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2206 | 1.0 | 5533 | 1.160322 |
| 0.9452 | 2.0 | 11066 | 1.121690 |
| 0.773 | 3.0 | 16599 | 1.157358 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
peter1133/ppo-SnowballTarget | peter1133 | 2023-01-18T02:29:49Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-01-18T02:29:43Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: peter1133/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
E1-Productions/rstiah | E1-Productions | 2023-01-18T01:45:55Z | 0 | 0 | null | [
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-01-18T01:45:16Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: rstiah
---
### rstiah Dreambooth model trained by Enigmatic1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-768 base model
You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
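For a quick test outside the notebook, a minimal `diffusers` sketch (assuming the repo is in standard diffusers format, as produced by the training space; remember to include the concept token `rstiah` in the prompt):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("E1-Productions/rstiah", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The concept token "rstiah" should appear in the prompt
image = pipe("a portrait photo of rstiah, studio lighting").images[0]
image.save("rstiah.png")
```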
Sample pictures of:
rstiah (use that on your prompt)

|
RichVip/Cute_RichStyle_2 | RichVip | 2023-01-18T01:37:44Z | 0 | 10 | null | [
"cartoon",
"CHARACTER",
"BABY",
"BABIES",
"LITTLE",
"SD1.5",
"DIGITAL ART",
"CUTE",
"MIDJOURNEY",
"DOLLS",
"license:apache-2.0",
"region:us"
] | null | 2023-01-17T18:44:49Z | ---
license: apache-2.0
tags:
- cartoon
- CHARACTER
- BABY
- BABIES
- LITTLE
- SD1.5
- DIGITAL ART
- CUTE
- MIDJOURNEY
- DOLLS
---
# Cute RichStyle - 768x768
A model trained on SD 2.1 with photos generated with Midjourney, created to generate people, animals, and creatures.
You can also make objects, landscapes, etc., but those may need a few more tries:
- 30 steps - 7cfg
- euler a,ddim, dpm++sde...
- you can use different resolutions, you can generate interesting things
Characters rendered with the model:
.jpg)
.jpg)
**TOKEN**: cbzbb, cbzbb style, cbzbb style of _____ . The token is not required, but it is better to include it, and it often works better wrapped in parentheses ().
possible positives: cute, little, baby, beautiful, fantasy art, devian art, trending artstation, digital art, detailed, cute, realistic, humanoide, character, tiny, film still of "____" , cinematic shot , "__" environment, beautiful landspace of _____, cinematic portrait of ______, cute character as a "_"....
If you want to make it less realistic, add the word "character" to the positive prompt.
Most important negatives (not mandatory, but they help a lot): pencil draw, bad photo, bad draw
other possible negatives: cartoon, woman, man, person, people, character, super hero, iron man, baby, anime...
(When you generate a photo, the model sometimes tries to create a person/character anyway; that's why the negative prompts for character, person, etc. can help.)
- Landscape prompts work better between ( ) or more parentheses, although it is not always necessary
- You can use other styles by removing the "cbzbb" token and adding pencil draw, lego style, watercolor, etc. It won't reproduce the exact photo style the model was trained on, but the results look great too!
- Most of the training photos are daytime; to create night scenes, this has worked:
- positive: (dark), (black sky) (dark sky) etc etc
- negative: (blue day), (day light), (day) (sun) etc etc
- To increase quality: send the photo you like best to img2img (30 steps, denoising 0.60-0.80) and generate 4 photos; choose one or repeat (lower denoising keeps it closer to the original, higher denoising changes it more). Send it back through img2img (you can raise the image's ratio/aspect a bit), lower the denoising to 0.40-0.50, generate 2-4 images, and pick the one you like with the most detail. Finally, send it to img2img again at a higher scale (same ratio/aspect) with denoising 0.15-0.30 and 50 steps, generating 1 photo; you can keep rescaling this way for more detail and resolution.
- Change the person/character in the image: if you like the photo but want a different character, send it to img2img, change the name of the character/person/animal in the prompt, and use denoising between 0.7 and 1.
**Prompt examples:**
cbzbb style of a pennywise
michael jackson, cbzbb, detailed, fantasy,super cute, trending on artstation
cbzbb style of angry baby groot
cute panda reading a book, cbzbb style
## ENJOY !!!!
The images you create are absolutely yours! But if you can share them with me on Twitter, Instagram or Reddit, anywhere, I'd LOVE to SEE what you can do with the model!
- **Twitter:** @RichViip
- **Instagram**: richviip
- **Reddit:** Richviip
Thank you for the support and great help of ALL the people on Patricio's Discord, who were there at every moment of the model's creation, giving their opinions on more than 15 different model versions and making my head hurt less!
Patricio's social media, follow him!!
- **Youtube:** patricio-fernandez
- **Twitter:** patriciofernanf |
henryscheible/eval_masked_102_qnli | henryscheible | 2023-01-18T01:31:57Z | 0 | 0 | null | [
"pytorch",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2023-01-18T00:15:54Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: eval_masked_102_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.90426505583013
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eval_masked_102_qnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5033
- Accuracy: 0.9043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
henryscheible/eval_masked_102_sst2 | henryscheible | 2023-01-18T01:06:27Z | 0 | 0 | null | [
"pytorch",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2023-01-18T00:15:36Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: eval_masked_102_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.926605504587156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eval_masked_102_sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3561
- Accuracy: 0.9266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
BlakeMartin/BeanDetector | BlakeMartin | 2023-01-18T01:00:59Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"image-classification",
"region:us"
] | image-classification | 2023-01-18T00:58:00Z | ---
library_name: diffusers
pipeline_tag: image-classification
--- |
jpopham91/a2c-AntBulletEnv-v0 | jpopham91 | 2023-01-18T00:54:36Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T00:53:25Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1593.92 +/- 487.94
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
nolanaatama/aeros | nolanaatama | 2023-01-18T00:50:14Z | 0 | 4 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-18T00:40:59Z | ---
license: creativeml-openrail-m
---
|
Abdo96/whisper-small-ar | Abdo96 | 2023-01-18T00:41:27Z | 13 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ar",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-01-14T21:39:56Z | ---
language:
- ar
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Ar - Abdallah Elbohy
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: ar
split: test
args: 'config: ar, split: test'
metrics:
- name: Wer
type: wer
value: 49.80809842989625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ar - Abdallah Elbohy
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It works well for short (up to 30 s) transcriptions, but long-form transcription still has some limitations and challenges.
It achieves the following results on the evaluation set:
- Loss: 0.3791
- Wer: 49.8081
## Model description
More information needed
## Intended uses & limitations
More information needed
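A minimal sketch for transcribing a short (up to 30 s) Arabic audio clip with the ASR pipeline (the file path is a placeholder):
```python
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="Abdo96/whisper-small-ar")

# Any short audio file; the path below is a placeholder
print(transcriber("sample_arabic_clip.wav")["text"])
```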
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0972 | 0.57 | 1000 | 0.3791 | 49.8081 |
| 0.0978 | 1.14 | 2000 | 0.3791 | 49.8081 |
| 0.0986 | 1.71 | 3000 | 0.3791 | 49.8081 |
| 0.1055 | 2.28 | 4000 | 0.3791 | 49.8081 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
DanielPaull/q-FrozenLake-v1-4x4-noSlippery | DanielPaull | 2023-01-18T00:33:37Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-18T00:33:32Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.23 +/- 0.42
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="DanielPaull/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
henryscheible/eval_masked_102_cola | henryscheible | 2023-01-18T00:21:14Z | 0 | 0 | null | [
"pytorch",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2023-01-18T00:15:29Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: eval_masked_102_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5988647643057969
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eval_masked_102_cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6601
- Matthews Correlation: 0.5989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
andrewljohnson/segformer-b5-finetuned-magic-cards-230117-2 | andrewljohnson | 2023-01-17T23:29:03Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2023-01-17T23:27:20Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b5-finetuned-magic-cards-230117-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-finetuned-magic-cards-230117-2
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the andrewljohnson/magic_cards dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0491
- Mean Iou: 0.6649
- Mean Accuracy: 0.9974
- Overall Accuracy: 0.9972
- Accuracy Unlabeled: nan
- Accuracy Front: 0.9990
- Accuracy Back: 0.9957
- Iou Unlabeled: 0.0
- Iou Front: 0.9990
- Iou Back: 0.9957
## Model description
More information needed
## Intended uses & limitations
More information needed
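A minimal inference sketch for the fine-tuned checkpoint (the image path is a placeholder; the card reports the labels unlabeled/front/back):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

checkpoint = "andrewljohnson/segformer-b5-finetuned-magic-cards-230117-2"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("card_photo.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch, num_labels, height/4, width/4)
pred = logits.argmax(dim=1)[0]      # per-pixel class indices at reduced resolution
print(pred.shape)
```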
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Front | Accuracy Back | Iou Unlabeled | Iou Front | Iou Back |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:--------------:|:-------------:|:-------------:|:---------:|:--------:|
| 0.5968 | 0.33 | 20 | 0.4422 | 0.6366 | 0.9701 | 0.9690 | nan | 0.9812 | 0.9590 | 0.0 | 0.9507 | 0.9590 |
| 0.8955 | 0.66 | 40 | 0.2353 | 0.6496 | 0.9819 | 0.9807 | nan | 0.9944 | 0.9695 | 0.0 | 0.9792 | 0.9695 |
| 0.1269 | 0.98 | 60 | 0.1739 | 0.6566 | 0.9922 | 0.9916 | nan | 0.9979 | 0.9866 | 0.0 | 0.9832 | 0.9866 |
| 0.7629 | 1.31 | 80 | 0.1664 | 0.6561 | 0.9915 | 0.9909 | nan | 0.9975 | 0.9856 | 0.0 | 0.9826 | 0.9856 |
| 0.106 | 1.64 | 100 | 0.1005 | 0.6641 | 0.9968 | 0.9967 | nan | 0.9978 | 0.9959 | 0.0 | 0.9966 | 0.9959 |
| 0.3278 | 1.97 | 120 | 0.0577 | 0.6632 | 0.9948 | 0.9947 | nan | 0.9963 | 0.9934 | 0.0 | 0.9963 | 0.9934 |
| 0.061 | 2.3 | 140 | 0.0655 | 0.6642 | 0.9963 | 0.9962 | nan | 0.9972 | 0.9953 | 0.0 | 0.9972 | 0.9953 |
| 0.0766 | 2.62 | 160 | 0.0470 | 0.6635 | 0.9953 | 0.9954 | nan | 0.9940 | 0.9966 | 0.0 | 0.9940 | 0.9966 |
| 0.0664 | 2.95 | 180 | 0.0436 | 0.6617 | 0.9926 | 0.9931 | nan | 0.9877 | 0.9975 | 0.0 | 0.9877 | 0.9975 |
| 0.0655 | 3.28 | 200 | 0.0632 | 0.6649 | 0.9973 | 0.9971 | nan | 0.9994 | 0.9953 | 0.0 | 0.9994 | 0.9953 |
| 0.0356 | 3.61 | 220 | 0.0755 | 0.6661 | 0.9991 | 0.9991 | nan | 0.9992 | 0.9991 | 0.0 | 0.9992 | 0.9991 |
| 0.0516 | 3.93 | 240 | 0.0470 | 0.6643 | 0.9965 | 0.9963 | nan | 0.9987 | 0.9943 | 0.0 | 0.9987 | 0.9943 |
| 0.0517 | 4.26 | 260 | 0.0481 | 0.6645 | 0.9967 | 0.9965 | nan | 0.9989 | 0.9945 | 0.0 | 0.9989 | 0.9945 |
| 0.1886 | 4.59 | 280 | 0.0823 | 0.6659 | 0.9988 | 0.9987 | nan | 0.9999 | 0.9977 | 0.0 | 0.9999 | 0.9977 |
| 0.0453 | 4.92 | 300 | 0.0491 | 0.6649 | 0.9974 | 0.9972 | nan | 0.9990 | 0.9957 | 0.0 | 0.9990 | 0.9957 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.0.dev0
|
SALT-NLP/pfadapter-roberta-base-cola-combined-value | SALT-NLP | 2023-01-17T22:59:40Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"en",
"dataset:glue",
"region:us"
] | null | 2022-09-15T21:44:47Z | ---
tags:
- adapter-transformers
- roberta
datasets:
- glue
language:
- en
---
# Adapter `SALT-NLP/pfadapter-roberta-base-cola-combined-value` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("SALT-NLP/pfadapter-roberta-base-cola-combined-value", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
robkayinto/distilbert-base-uncased-finetuned-clinc | robkayinto | 2023-01-17T22:34:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-17T17:02:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9151612903225806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7773
- Accuracy: 0.9152
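As a quick sanity check, the checkpoint can be loaded with the standard `pipeline` API. A minimal sketch (the example utterance is made up for illustration, not taken from the dataset):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="robkayinto/distilbert-base-uncased-finetuned-clinc",
)
# Returns the predicted CLINC intent label with its score
print(classifier("how do I transfer money to my savings account"))
```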
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.293 | 1.0 | 318 | 3.2831 | 0.7432 |
| 2.6252 | 2.0 | 636 | 1.8743 | 0.8306 |
| 1.5406 | 3.0 | 954 | 1.1576 | 0.8939 |
| 1.0105 | 4.0 | 1272 | 0.8626 | 0.9094 |
| 0.7962 | 5.0 | 1590 | 0.7773 | 0.9152 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
andrewljohnson/segformer-b5-finetuned-magic-cards-230117 | andrewljohnson | 2023-01-17T22:31:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2023-01-17T19:49:24Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b5-finetuned-magic-cards-230117
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-finetuned-magic-cards-230117
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the andrewljohnson/magic_cards dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2096
- Mean Iou: 0.6629
- Mean Accuracy: 0.9944
- Overall Accuracy: 0.9944
- Accuracy Unlabeled: nan
- Accuracy Front: 0.9997
- Accuracy Back: 0.9891
- Iou Unlabeled: 0.0
- Iou Front: 0.9997
- Iou Back: 0.9891
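A minimal inference sketch for the checkpoint (it assumes the preprocessor config was pushed alongside the weights; the image path is a placeholder, and newer Transformers versions expose the same preprocessing as `SegformerImageProcessor`):
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

repo = "andrewljohnson/segformer-b5-finetuned-magic-cards-230117"
feature_extractor = SegformerFeatureExtractor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("card_photo.jpg")  # placeholder path to a photo of a card
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # shape (1, num_labels, H/4, W/4)
mask = logits.argmax(dim=1)[0]        # per-pixel class ids (unlabeled / front / back)
```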
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Front | Accuracy Back | Iou Unlabeled | Iou Front | Iou Back |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:--------------:|:-------------:|:-------------:|:---------:|:--------:|
| 0.496 | 0.74 | 20 | 0.4441 | 0.6552 | 0.9838 | 0.9838 | nan | 0.9786 | 0.9890 | 0.0 | 0.9786 | 0.9869 |
| 0.1693 | 1.48 | 40 | 0.4098 | 0.6597 | 0.9897 | 0.9897 | nan | 0.9943 | 0.9851 | 0.0 | 0.9943 | 0.9849 |
| 0.1172 | 2.22 | 60 | 0.2734 | 0.6582 | 0.9874 | 0.9874 | nan | 0.9977 | 0.9770 | 0.0 | 0.9977 | 0.9770 |
| 0.1335 | 2.96 | 80 | 0.2637 | 0.6609 | 0.9914 | 0.9914 | nan | 0.9959 | 0.9869 | 0.0 | 0.9959 | 0.9869 |
| 0.0781 | 3.7 | 100 | 0.5178 | 0.6644 | 0.9966 | 0.9966 | nan | 0.9998 | 0.9933 | 0.0 | 0.9998 | 0.9933 |
| 0.1302 | 4.44 | 120 | 0.2753 | 0.6652 | 0.9978 | 0.9978 | nan | 0.9993 | 0.9962 | 0.0 | 0.9993 | 0.9962 |
| 0.0688 | 5.19 | 140 | 0.1458 | 0.6618 | 0.9926 | 0.9926 | nan | 0.9950 | 0.9903 | 0.0 | 0.9950 | 0.9903 |
| 0.0866 | 5.93 | 160 | 0.1763 | 0.6636 | 0.9954 | 0.9954 | nan | 0.9962 | 0.9946 | 0.0 | 0.9962 | 0.9946 |
| 0.0525 | 6.67 | 180 | 0.1812 | 0.6627 | 0.9941 | 0.9941 | nan | 0.9988 | 0.9895 | 0.0 | 0.9988 | 0.9895 |
| 0.0679 | 7.41 | 200 | 0.2246 | 0.6625 | 0.9937 | 0.9937 | nan | 0.9990 | 0.9884 | 0.0 | 0.9990 | 0.9884 |
| 0.0424 | 8.15 | 220 | 0.2079 | 0.6623 | 0.9934 | 0.9935 | nan | 0.9996 | 0.9873 | 0.0 | 0.9996 | 0.9873 |
| 0.0349 | 8.89 | 240 | 0.1559 | 0.6626 | 0.9939 | 0.9940 | nan | 0.9987 | 0.9892 | 0.0 | 0.9987 | 0.9892 |
| 0.0357 | 9.63 | 260 | 0.2096 | 0.6629 | 0.9944 | 0.9944 | nan | 0.9997 | 0.9891 | 0.0 | 0.9997 | 0.9891 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.0.dev0
|
Arch4ngel/a2c-AntBulletEnv-v0 | Arch4ngel | 2023-01-17T22:27:00Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-17T22:25:48Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1654.94 +/- 102.31
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
atorre/a2c-PandaReachDense-v2 | atorre | 2023-01-17T22:14:37Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-17T22:11:52Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: a2c
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.61 +/- 0.33
name: mean_reward
verified: false
---
# **a2c** Agent playing **PandaReachDense-v2**
This is a trained model of a **a2c** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Bhaclash/ppo-LunarLander-v2 | Bhaclash | 2023-01-17T22:13:03Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-17T20:39:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.83 +/- 21.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Geandro/001 | Geandro | 2023-01-17T22:06:24Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-17T22:06:11Z | ```
git lfs install
git clone https://huggingface.co/WarriorMama777/OrangeMixs
```
|
atorre/a2c-AntBulletEnv-v0 | atorre | 2023-01-17T22:02:25Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-17T22:01:07Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: a2c
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2188.68 +/- 82.59
name: mean_reward
verified: false
---
# **a2c** Agent playing **AntBulletEnv-v0**
This is a trained model of a **a2c** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
asubiabre/ppo-LunarLander-v2 | asubiabre | 2023-01-17T21:46:02Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-10T15:44:42Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.58 +/- 35.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DiegoD616/ppo-SnowballTarget | DiegoD616 | 2023-01-17T21:44:37Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-01-17T21:44:31Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: DiegoD616/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jastorga/ppo-LunarLander-v2 | jastorga | 2023-01-17T21:40:47Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-17T21:40:21Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 216.82 +/- 77.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Unterwexi/a2c-PandaReachDense-v2 | Unterwexi | 2023-01-17T21:38:24Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-17T21:36:00Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.72 +/- 0.78
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
tmilushev/a2c-PandaReachDense-v2 | tmilushev | 2023-01-17T21:35:21Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-17T21:32:58Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.52 +/- 0.19
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jhaochenz/finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs6 | jhaochenz | 2023-01-17T21:35:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:sst2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-01-17T08:09:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs6
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3386
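Since this is a causal language-model fine-tune, it can be sampled with the text-generation pipeline. A minimal sketch (the prompt is an arbitrary example):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jhaochenz/finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs6",
)
print(generator("The movie was", max_new_tokens=30, do_sample=True)[0]["generated_text"])
```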
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6761 | 1.0 | 1323 | 3.2509 |
| 2.4709 | 2.0 | 2646 | 3.2494 |
| 2.3828 | 3.0 | 3969 | 3.2751 |
| 2.2987 | 4.0 | 5292 | 3.3006 |
| 2.2483 | 5.0 | 6615 | 3.3306 |
| 2.2195 | 6.0 | 7938 | 3.3386 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
jhaochenz/finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs3 | jhaochenz | 2023-01-17T21:14:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:sst2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-01-17T08:10:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs3
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6821 | 1.0 | 1323 | 3.2535 |
| 2.5045 | 2.0 | 2646 | 3.2502 |
| 2.4511 | 3.0 | 3969 | 3.2579 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
nepp1d0/prot_bert_classification_finetuned_karolina_es_10e | nepp1d0 | 2023-01-17T20:58:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-10T15:57:51Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: prot_bert_classification_finetuned_karolina_es_10e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prot_bert_classification_finetuned_karolina_es_10e
This model is a fine-tuned version of [nepp1d0/prot_bert-finetuned-smiles-bindingDB](https://huggingface.co/nepp1d0/prot_bert-finetuned-smiles-bindingDB) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6840
- Accuracy: 0.88
- F1: 0.9362
- Precision: 1.0
- Recall: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 4 | 0.7082 | 0.02 | 0.0392 | 1.0 | 0.02 |
| No log | 2.0 | 8 | 0.7073 | 0.04 | 0.0769 | 1.0 | 0.04 |
| No log | 3.0 | 12 | 0.7060 | 0.04 | 0.0769 | 1.0 | 0.04 |
| No log | 4.0 | 16 | 0.7047 | 0.04 | 0.0769 | 1.0 | 0.04 |
| No log | 5.0 | 20 | 0.7034 | 0.08 | 0.1481 | 1.0 | 0.08 |
| No log | 6.0 | 24 | 0.7008 | 0.22 | 0.3607 | 1.0 | 0.22 |
| No log | 7.0 | 28 | 0.6976 | 0.22 | 0.3607 | 1.0 | 0.22 |
| No log | 8.0 | 32 | 0.6933 | 0.3 | 0.4615 | 1.0 | 0.3 |
| No log | 9.0 | 36 | 0.6893 | 0.6 | 0.7500 | 1.0 | 0.6 |
| No log | 10.0 | 40 | 0.6840 | 0.88 | 0.9362 | 1.0 | 0.88 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
etweedy/tessa | etweedy | 2023-01-17T20:55:28Z | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-31T18:03:41Z | ---
license: apache-2.0
tags:
- text-to-image
---
### Tessa on Stable Diffusion v1.5 via Dreambooth
This is a Stable Diffusion (v1.5) model fine-tuned on the concept of my dog, Tessa, using the Dreambooth method: https://dreambooth.github.io/
To use the model, try modifying the basic prompt: \"**a photo of \<tessa\> dog**\".
The model was fine-tuned for 1200 steps with a learning rate of 2e-6, using 15 images of Tessa and 500 class regularization images. The class images were generated in advance by Stable Diffusion v1.5 using the prompt "Photo of a dog".
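Since the repository stores the fine-tuned weights in the diffusers format, they can be loaded directly with `StableDiffusionPipeline`. A minimal sketch (the extra prompt details are arbitrary, and a CUDA GPU is assumed):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("etweedy/tessa", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # drop torch_dtype and .to("cuda") to run on CPU

image = pipe("a photo of <tessa> dog sitting in a meadow, golden hour").images[0]
image.save("tessa.png")
```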
Here are the images of Tessa used for training this concept:














 |
tmilushev/a2c-AntBulletEnv-v0 | tmilushev | 2023-01-17T20:36:26Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-17T20:35:20Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1812.99 +/- 151.52
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Unterwexi/a2c-AntBulletEnv-v0 | Unterwexi | 2023-01-17T20:33:51Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-17T20:32:41Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1394.32 +/- 75.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
facebook/DiT-XL-2-256 | facebook | 2023-01-17T20:29:53Z | 5,061 | 16 | diffusers | [
"diffusers",
"license:cc-by-nc-4.0",
"diffusers:DiTPipeline",
"region:us"
] | null | 2023-01-17T20:25:12Z | ---
license: cc-by-nc-4.0
---
# Scalable Diffusion Models with Transformers (DiT)
## Abstract
We train latent diffusion models, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops---through increased transformer depth/width or increased number of input tokens---consistently have lower FID. In addition to good scalability properties, our DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512×512 and 256×256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.
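A minimal class-conditional sampling sketch with the `diffusers` `DiTPipeline` (the class label below is an arbitrary ImageNet category):
```python
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

class_ids = pipe.get_label_ids(["golden retriever"])  # map ImageNet class names to label ids
image = pipe(class_labels=class_ids, num_inference_steps=25).images[0]
image.save("dit_sample.png")
```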
|
kashif/DiT-XL-2-256 | kashif | 2023-01-17T20:05:44Z | 5 | 0 | diffusers | [
"diffusers",
"license:cc-by-nc-4.0",
"diffusers:DiTPipeline",
"region:us"
] | null | 2022-12-23T12:03:39Z | ---
license: cc-by-nc-4.0
---
# Scalable Diffusion Models with Transformers (DiT)
## Abstract
We train latent diffusion models, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops---through increased transformer depth/width or increased number of input tokens---consistently have lower FID. In addition to good scalability properties, our DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512×512 and 256×256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.
|
RichVip/Cute_RichStyle_1.5 | RichVip | 2023-01-17T20:04:22Z | 0 | 8 | null | [
"BABY",
"BABIES",
"LITTLE",
"SD2.1",
"DIGITAL ART",
"CUTE",
"MIDJOURNEY",
"DOLLS",
"CHARACTER",
"CARTOON",
"license:apache-2.0",
"region:us"
] | null | 2023-01-17T18:42:44Z | ---
license: apache-2.0
tags:
- BABY
- BABIES
- LITTLE
- SD2.1
- DIGITAL ART
- CUTE
- MIDJOURNEY
- DOLLS
- CHARACTER
- CARTOON
---
# Cute RichStyle - 512x512
Model trained on SD 1.5 with photos generated with Midjourney, created to generate people and animals/creatures...
You can also make objects, landscapes, etc., but those may need more tries:
- 30 steps - 7cfg
- euler a, ddim, dpm++ sde...
- you can use different resolutions; you can generate interesting things
Characters rendered with the model:
.jpg)
.jpg)
**TOKEN**: cbzbb, cbzbb style, cbzbb style of _____ . You can add the token (it is not required), but results are better with it. The token often works better wrapped in parentheses ().
Possible positives: cute, little, baby, beautiful, fantasy art, DeviantArt, trending on ArtStation, digital art, detailed, realistic, humanoid, character, tiny, film still of "____", cinematic shot, "__" environment, beautiful landscape of _____, cinematic portrait of ______, cute character as a "_"...
Most important negatives (not mandatory, but they help a lot): pencil draw, bad photo, bad draw
Other possible negatives: cartoon, woman, man, person, people, character, super hero, iron man, baby, anime...
((When you generate an image, the model sometimes tries to create a person/character anyway; that is why the negative prompts about characters, etc., help.))
- Landscape prompts work better between ( ) or more parentheses, although it is not always necessary
- You can use other styles by removing the "cbzbb" token and adding pencil draw, lego style, watercolor, etc.; it won't reproduce the exact photo style I trained on, but they look great too!!
- Most of the training photos are daytime; to create night scenes, this has worked:
- positive: (dark), (black sky) (dark sky) etc etc
- negative: (blue day), (day light), (day) (sun) etc etc
- To increase quality: send the photo you like most to img2img (30 steps) at 0.60-0.80 denoising and generate 4 photos; choose one or repeat (with less denoising to stay closer to the original, or more to change it more). Resend it via img2img (you can raise the image's ratio/aspect a bit), lower the denoising to 0.40-0.50, generate 2-4 images, and choose the one you like most with the most detail. Send it to img2img again at a larger photo scale (same ratio/aspect) at 0.15-0.30 denoising and 50 steps and generate 1 photo; if you want, you can continue rescaling it for more detail and more resolution
- Change the person/character in the image: if you like the photo but want to change the character, send it to img2img, change the name of the character, person, or animal in the prompt, and use 0.7-1.0 denoising
**Prompt examples:**
cbzbb style of a pennywise
michael jackson, cbzbb, detailed, fantasy,super cute, trending on artstation
cbzbb style of angry baby groot
cute panda reading a book, cbzbb style
## ENJOY !!!!
The images you create are absolutely yours! But if you can share them with me on Twitter, Instagram, Reddit, anywhere, I'd LOVE to SEE what you can do with the model!
- **Twitter:** @RichViip
- **Instagram**: richviip
- **Reddit:** Richviip
Thank you for the support and great help of ALL the people on Patricio's Discord, who gave their opinions throughout the creation of the model, across more than 15 different model versions, and made my head hurt less!
Patricio's social media, follow him!!
- **Youtube:** patricio-fernandez
- **Twitter:** patriciofernanf |
saikiranp/a2c-PandaReachDense-v2 | saikiranp | 2023-01-17T20:02:27Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-17T19:12:27Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.96 +/- 0.25
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Tomor0720/deberta-large-finetuned-sst2 | Tomor0720 | 2023-01-17T20:01:16Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-17T16:30:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-large-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: train
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9495412844036697
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-large-finetuned-sst2
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2159
- Accuracy: 0.9495
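A minimal inference sketch that loads the checkpoint directly (the example sentence is an arbitrary assumption; SST-2 distinguishes negative/positive sentiment):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Tomor0720/deberta-large-finetuned-sst2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A charming and often affecting journey.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```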
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1214 | 1.0 | 4210 | 0.1969 | 0.9438 |
| 0.067 | 2.0 | 8420 | 0.2159 | 0.9495 |
| 0.0405 | 3.0 | 12630 | 0.2159 | 0.9495 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
BobMcDear/convnext_base_clip_320_laiona_augreg | BobMcDear | 2023-01-17T19:51:30Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-17T19:49:23Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnext_base_clip_laiona | BobMcDear | 2023-01-17T19:51:18Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-17T19:49:24Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnext_base_clip_laion2b | BobMcDear | 2023-01-17T19:51:11Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-17T19:49:22Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnext_base_clip_320_laiona | BobMcDear | 2023-01-17T19:51:05Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-17T19:49:21Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
Alassea/middle-dutch-NER_passAgg_2 | Alassea | 2023-01-17T19:34:00Z | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"region:us"
] | tabular-classification | 2023-01-17T19:25:54Z | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_file: model.pkl
widget:
structuredData:
word:
- lathem
- meer
- slaen
---
# Model description
Middle Dutch NER with PassiveAggressiveClassifier
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
TESTING
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|---------------------------|----------------------------------------------------------------------|
| memory | |
| steps | [('vectorizer', CountVectorizer()), ('classifier', MultinomialNB())] |
| verbose | False |
| vectorizer | CountVectorizer() |
| classifier | MultinomialNB() |
| vectorizer__analyzer | word |
| vectorizer__binary | False |
| vectorizer__decode_error | strict |
| vectorizer__dtype | <class 'numpy.int64'> |
| vectorizer__encoding | utf-8 |
| vectorizer__input | content |
| vectorizer__lowercase | True |
| vectorizer__max_df | 1.0 |
| vectorizer__max_features | |
| vectorizer__min_df | 1 |
| vectorizer__ngram_range | (1, 1) |
| vectorizer__preprocessor | |
| vectorizer__stop_words | |
| vectorizer__strip_accents | |
| vectorizer__token_pattern | (?u)\b\w\w+\b |
| vectorizer__tokenizer | |
| vectorizer__vocabulary | |
| classifier__alpha | 1.0 |
| classifier__class_prior | |
| classifier__fit_prior | True |
</details>
### Model Plot
The model plot is below.
(The interactive sklearn HTML diagram is omitted here; it renders the fitted estimator: `Pipeline(steps=[('vectorizer', CountVectorizer()), ('classifier', MultinomialNB())])`.)
## Evaluation Results
You can find the details about evaluation process and the evaluation results.
| Metric | Value |
|-------------------------|----------|
| accuracy including 'O' | 0.905322 |
| f1 score including 'O | 0.905322 |
| precision excluding 'O' | 0.892857 |
| recall excluding 'O' | 0.404732 |
| f1 excluding 'O' | 0.556984 |
### Confusion Matrix

# How to Get Started with the Model
[More Information Needed]
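A minimal loading sketch (this assumes `model.pkl`, listed in the card metadata, is a standard pickled sklearn pipeline; only load pickles from sources you trust):
```python
import joblib
from huggingface_hub import hf_hub_download

path = hf_hub_download("Alassea/middle-dutch-NER_passAgg_2", "model.pkl")
pipeline = joblib.load(path)

# Example words taken from the widget section of this card
print(pipeline.predict(["lathem", "meer", "slaen"]))
```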
# Model Card Authors
Alassea TEST
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
**BibTeX**
```
@inproceedings{...,year={2022}}
```
|
BobMcDear/vit_base_clip_patch32_224_openai | BobMcDear | 2023-01-17T18:59:49Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-17T17:37:25Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/vit_huge_clip_patch14_224_laion2b | BobMcDear | 2023-01-17T18:59:35Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-17T17:37:24Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/vit_giant_clip_patch14_224_laion2b | BobMcDear | 2023-01-17T18:59:28Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-17T17:37:23Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/vit_base_clip_patch32_224_laion2b | BobMcDear | 2023-01-17T18:59:15Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-17T17:37:20Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/vit_large_clip_patch14_224_laion2b | BobMcDear | 2023-01-17T18:59:01Z | 0 | 0 | null | [
"region:us"
] | null | 2023-01-17T17:37:18Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|