| modelId (string, length 5-139) | author (string, length 2-42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-16 06:27:54) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 522 classes) | tags (list, length 1-4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-16 06:27:41) | card (string, length 11-1.01M) |
|---|---|---|---|---|---|---|---|---|---|
dhmeltzer/ppo-pyramid | dhmeltzer | 2023-02-01T04:47:43Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-01T04:43:06Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: dhmeltzer/ppo-pyramid
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
nlp04/sentiment_roberta_large_with_diary | nlp04 | 2023-02-01T04:20:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-01T02:39:43Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment_roberta_large_with_diary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_roberta_large_with_diary
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5671
- Micro f1 score: 80.0000
- Auprc: 77.0282
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- mixed_precision_training: Native AMP
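For reference, this configuration maps onto the `transformers` Trainer API roughly as follows (a sketch; `output_dir` is a placeholder, since the training script itself is not published here):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sentiment_roberta_large_with_diary",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size: 16 * 4 = 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
    fp16=True,  # "Native AMP" mixed precision
)
```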
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro f1 score | Auprc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:-------:|:--------:|
| 1.6198 | 0.13 | 100 | 1.3872 | 48.9362 | 55.5743 | 0.4894 |
| 0.6603 | 0.26 | 200 | 0.9249 | 65.9574 | 62.8759 | 0.6596 |
| 0.5387 | 0.4 | 300 | 0.7262 | 73.1915 | 71.1936 | 0.7319 |
| 0.4801 | 0.53 | 400 | 0.6623 | 74.0426 | 68.8606 | 0.7404 |
| 0.4597 | 0.66 | 500 | 0.6092 | 76.1702 | 75.7346 | 0.7617 |
| 0.4217 | 0.79 | 600 | 0.5929 | 78.7234 | 76.8709 | 0.7872 |
| 0.4148 | 0.93 | 700 | 0.5671 | 80.0000 | 77.0282 | 0.8 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
BachNgoH/q-FrozenLake-v1-4x4-noSlippery | BachNgoH | 2023-02-01T04:04:45Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-01T04:04:43Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

model = load_from_hub(repo_id="BachNgoH/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
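`load_from_hub` here is defined in the Deep RL course notebooks rather than in a published package; a minimal sketch, assuming the model dictionary was serialized with pickle as in the course:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table dictionary from the Hub and deserialize it.
    pickle_file = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_file, "rb") as f:
        return pickle.load(f)
```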
|
gaarsmu/atari_invaders_1M | gaarsmu | 2023-02-01T04:00:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-01T03:59:34Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 390.00 +/- 139.71
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gaarsmu -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gaarsmu -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gaarsmu
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
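Outside the RL Zoo scripts, the checkpoint can also be loaded with the plain SB3 API; a sketch, assuming the usual RL Zoo archive name (check the repository's file list for the actual filename):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# The filename is an assumption based on the usual RL Zoo naming convention.
checkpoint = load_from_hub(
    repo_id="gaarsmu/atari_invaders_1M",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```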
|
irow/dual-bert-IR | irow | 2023-02-01T03:58:59Z | 0 | 0 | null | [
"en",
"dataset:BeIR/msmarco",
"region:us"
]
| null | 2023-01-26T23:55:54Z | ---
datasets:
- BeIR/msmarco
language:
- en
---
This model uses two BERT models fine-tuned with a contrastive learning objective.
One is responsible for encoding short queries, and the other for the longer documents that contain the answer to the query.
After encoding many documents, one may perform a nearest-neighbor search with the query encoding to fetch
the most relevant document.
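Since inference.py is not reproduced on this card, the following is only a sketch of the dual-encoder retrieval pattern described above; the checkpoint names and the mean pooling are assumptions:
```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical base checkpoints; the actual weights and logic live in this repo's inference.py.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
query_encoder = AutoModel.from_pretrained("bert-base-uncased")
doc_encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(encoder, texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state
    # Mean-pool token embeddings into one vector per text (an assumed pooling choice).
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

docs = ["Paris is the capital of France.", "BERT is a transformer encoder."]
query_vec = encode(query_encoder, ["What is the capital of France?"])
doc_vecs = encode(doc_encoder, docs)

# Nearest-neighbor search by dot-product similarity.
best = torch.argmax(query_vec @ doc_vecs.T).item()
print(docs[best])
```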
Take a look at inference.py to see how to perform inference. |
sschet/ner-disease-ncbi-bionlp-bc5cdr-pubmed | sschet | 2023-02-01T03:44:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"ner",
"ncbi",
"disease",
"pubmed",
"bioinfomatics",
"en",
"dataset:ncbi-disease",
"dataset:bc5cdr",
"dataset:tner/bc5cdr",
"dataset:commanderstrife/jnlpba",
"dataset:bc2gm_corpus",
"dataset:drAbreu/bc4chemd_ner",
"dataset:linnaeus",
"dataset:chintagunta85/ncbi_disease",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-01T02:52:04Z | ---
language:
- en
tags:
- ner
- ncbi
- disease
- pubmed
- bioinfomatics
license: apache-2.0
datasets:
- ncbi-disease
- bc5cdr
- tner/bc5cdr
- commanderstrife/jnlpba
- bc2gm_corpus
- drAbreu/bc4chemd_ner
- linnaeus
- chintagunta85/ncbi_disease
widget:
- text: "Hepatocyte nuclear factor 4 alpha (HNF4α) is regulated by different promoters to generate two isoforms, one of which functions as a tumor suppressor. Here, the authors reveal that induction of the alternative isoform in hepatocellular carcinoma inhibits the circadian clock by repressing BMAL1, and the reintroduction of BMAL1 prevents HCC tumor growth."
---
# NER to find Diseases
> The model was trained on the NCBI-disease and BC5CDR datasets, starting from this [pubmed-pretrained RoBERTa model](/raynardj/roberta-pubmed)
All the labels (the possible token classes):
```json
{"label2id": {
"O": 0,
"Disease":1,
}
}
```
Note that we removed the 'B-'/'I-' prefixes from the data labels. 🗡
## This is the template we suggest for using the model
```python
from transformers import pipeline
PRETRAINED = "raynardj/ner-disease-ncbi-bionlp-bc5cdr-pubmed"
ner = pipeline(task="ner",model=PRETRAINED, tokenizer=PRETRAINED)
ner("Your text", aggregation_strategy="first")
```
And here is a helper to make your output more contiguous ⭐️
```python
import pandas as pd
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
def clean_output(outputs):
    results = []
    current = []
    last_idx = 0
    # group outputs into runs of consecutive token positions
    for output in outputs:
        if output["index"] - 1 == last_idx:
            current.append(output)
        else:
            results.append(current)
            current = [output, ]
        last_idx = output["index"]
    if len(current) > 0:
        results.append(current)

    # from tokens to strings
    strings = []
    for c in results:
        tokens = []
        starts = []
        ends = []
        for o in c:
            tokens.append(o['word'])
            starts.append(o['start'])
            ends.append(o['end'])
        new_str = tokenizer.convert_tokens_to_string(tokens)
        if new_str != '':
            strings.append(dict(
                word=new_str,
                start=min(starts),
                end=max(ends),
                entity=c[0]['entity']
            ))
    return strings

def entity_table(pipeline, **pipeline_kw):
    if "aggregation_strategy" not in pipeline_kw:
        pipeline_kw["aggregation_strategy"] = "first"
    def create_table(text):
        return pd.DataFrame(
            clean_output(
                pipeline(text, **pipeline_kw)
            )
        )
    return create_table

# will return a dataframe
entity_table(ner)(YOUR_VERY_CONTENTFUL_TEXT)
```
> Check our NER models on:
* [gene and gene products](/raynardj/ner-gene-dna-rna-jnlpba-pubmed)
* [chemical substance](/raynardj/ner-chemical-bionlp-bc5cdr-pubmed)
* [disease](/raynardj/ner-disease-ncbi-bionlp-bc5cdr-pubmed) |
sschet/scibert_scivocab_uncased-finetuned-ner | sschet | 2023-02-01T03:44:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"Named Entity Recognition",
"SciBERT",
"Adverse Effect",
"Drug",
"Medical",
"en",
"dataset:ade_corpus_v2",
"dataset:tner/bc5cdr",
"dataset:commanderstrife/jnlpba",
"dataset:bc2gm_corpus",
"dataset:drAbreu/bc4chemd_ner",
"dataset:linnaeus",
"dataset:chintagunta85/ncbi_disease",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-01T02:41:03Z | ---
language:
- en
tags:
- Named Entity Recognition
- SciBERT
- Adverse Effect
- Drug
- Medical
datasets:
- ade_corpus_v2
- tner/bc5cdr
- commanderstrife/jnlpba
- bc2gm_corpus
- drAbreu/bc4chemd_ner
- linnaeus
- chintagunta85/ncbi_disease
widget:
- text: "Abortion, miscarriage or uterine hemorrhage associated with misoprostol (Cytotec), a labor-inducing drug."
example_title: "Abortion, miscarriage, ..."
- text: "Addiction to many sedatives and analgesics, such as diazepam, morphine, etc."
example_title: "Addiction to many..."
- text: "Birth defects associated with thalidomide"
example_title: "Birth defects associated..."
- text: "Bleeding of the intestine associated with aspirin therapy"
example_title: "Bleeding of the intestine..."
- text: "Cardiovascular disease associated with COX-2 inhibitors (i.e. Vioxx)"
example_title: "Cardiovascular disease..."
---
This is a SciBERT-based model fine-tuned to perform Named Entity Recognition for drug names and adverse drug effects.

This model classifies input tokens into one of five classes:
- `B-DRUG`: beginning of a drug entity
- `I-DRUG`: within a drug entity
- `B-EFFECT`: beginning of an AE entity
- `I-EFFECT`: within an AE entity
- `O`: outside either of the above entities
To get started using this model for inference, simply set up an NER `pipeline` like below:
```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    pipeline,
)

model_checkpoint = "jsylee/scibert_scivocab_uncased-finetuned-ner"
model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint,
    num_labels=5,
    id2label={0: 'O', 1: 'B-DRUG', 2: 'I-DRUG', 3: 'B-EFFECT', 4: 'I-EFFECT'},
)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

model_pipeline = pipeline(task="ner", model=model, tokenizer=tokenizer)

print(model_pipeline("Abortion, miscarriage or uterine hemorrhage associated with misoprostol (Cytotec), a labor-inducing drug."))
```
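The pipeline above emits one prediction per word piece; to merge pieces into whole entity spans, the `transformers` NER pipeline accepts an aggregation strategy:
```python
model_pipeline = pipeline(task="ner", model=model, tokenizer=tokenizer, aggregation_strategy="first")
```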
SciBERT: https://huggingface.co/allenai/scibert_scivocab_uncased
Dataset: https://huggingface.co/datasets/ade_corpus_v2
|
sschet/bert-large-uncased_med-ner | sschet | 2023-02-01T03:42:17Z | 30 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"en",
"dataset:tner/bc5cdr",
"dataset:commanderstrife/jnlpba",
"dataset:bc2gm_corpus",
"dataset:drAbreu/bc4chemd_ner",
"dataset:linnaeus",
"dataset:chintagunta85/ncbi_disease",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-01T02:02:56Z | ---
language:
- en
datasets:
- tner/bc5cdr
- commanderstrife/jnlpba
- bc2gm_corpus
- drAbreu/bc4chemd_ner
- linnaeus
- chintagunta85/ncbi_disease
---
A Named Entity Recognition model for medication entities (`medication name`, `dosage`, `duration`, `frequency`, `reason`).
The model has been trained on the i2b2 (now n2c2) dataset for the 2009 Medication task. Please visit the n2c2 site to request access to the dataset.
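The card ships no usage code; a minimal sketch with the generic `transformers` NER pipeline (the checkpoint's exact label names are not documented here, so the output labels are whatever the model defines):
```python
from transformers import pipeline

# Sketch only: the example sentence is illustrative.
ner = pipeline(task="ner", model="sschet/bert-large-uncased_med-ner", aggregation_strategy="simple")
print(ner("Take 2 tablets of ibuprofen every 6 hours for 5 days for back pain."))
```
|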
sschet/bern2-ner | sschet | 2023-02-01T03:41:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"en",
"dataset:tner/bc5cdr",
"dataset:commanderstrife/jnlpba",
"dataset:bc2gm_corpus",
"dataset:drAbreu/bc4chemd_ner",
"dataset:linnaeus",
"dataset:chintagunta85/ncbi_disease",
"endpoints_compatible",
"region:us"
]
| null | 2023-02-01T01:41:08Z | ---
language:
- en
datasets:
- tner/bc5cdr
- commanderstrife/jnlpba
- bc2gm_corpus
- drAbreu/bc4chemd_ner
- linnaeus
- chintagunta85/ncbi_disease
---
|
sschet/ner-gene-dna-rna-jnlpba-pubmed | sschet | 2023-02-01T03:41:37Z | 9 | 4 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"ner",
"gene",
"protein",
"rna",
"bioinfomatics",
"en",
"dataset:jnlpba",
"dataset:tner/bc5cdr",
"dataset:commanderstrife/jnlpba",
"dataset:bc2gm_corpus",
"dataset:drAbreu/bc4chemd_ner",
"dataset:linnaeus",
"dataset:chintagunta85/ncbi_disease",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-01T01:31:59Z | ---
language:
- en
tags:
- ner
- gene
- protein
- rna
- bioinfomatics
license: apache-2.0
datasets:
- jnlpba
- tner/bc5cdr
- commanderstrife/jnlpba
- bc2gm_corpus
- drAbreu/bc4chemd_ner
- linnaeus
- chintagunta85/ncbi_disease
widget:
- text: "It consists of 25 exons encoding a 1,278-amino acid glycoprotein that is composed of 13 transmembrane domains"
---
# NER to find Gene & Gene products
> The model was trained on the JNLPBA dataset, starting from this [pubmed-pretrained RoBERTa model](/raynardj/roberta-pubmed)
All the labels (the possible token classes):
```json
{"label2id": {
"DNA": 2,
"O": 0,
"RNA": 5,
"cell_line": 4,
"cell_type": 3,
"protein": 1
}
}
```
Note that we removed the 'B-'/'I-' prefixes from the data labels. 🗡
## This is the template we suggest for using the model
```python
from transformers import pipeline
PRETRAINED = "raynardj/ner-gene-dna-rna-jnlpba-pubmed"
ner = pipeline(task="ner", model=PRETRAINED, tokenizer=PRETRAINED)
ner("Your text", aggregation_strategy="first")
```
And here is a helper to make your output more contiguous ⭐️
```python
import pandas as pd
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
def clean_output(outputs):
    results = []
    current = []
    last_idx = 0
    # group outputs into runs of consecutive token positions
    for output in outputs:
        if output["index"] - 1 == last_idx:
            current.append(output)
        else:
            results.append(current)
            current = [output, ]
        last_idx = output["index"]
    if len(current) > 0:
        results.append(current)

    # from tokens to strings
    strings = []
    for c in results:
        tokens = []
        starts = []
        ends = []
        for o in c:
            tokens.append(o['word'])
            starts.append(o['start'])
            ends.append(o['end'])
        new_str = tokenizer.convert_tokens_to_string(tokens)
        if new_str != '':
            strings.append(dict(
                word=new_str,
                start=min(starts),
                end=max(ends),
                entity=c[0]['entity']
            ))
    return strings

def entity_table(pipeline, **pipeline_kw):
    if "aggregation_strategy" not in pipeline_kw:
        pipeline_kw["aggregation_strategy"] = "first"
    def create_table(text):
        return pd.DataFrame(
            clean_output(
                pipeline(text, **pipeline_kw)
            )
        )
    return create_table

# will return a dataframe
entity_table(ner)(YOUR_VERY_CONTENTFUL_TEXT)
```
> Check our NER models on:
* [gene and gene products](/raynardj/ner-gene-dna-rna-jnlpba-pubmed)
* [chemical substance](/raynardj/ner-chemical-bionlp-bc5cdr-pubmed)
* [disease](/raynardj/ner-disease-ncbi-bionlp-bc5cdr-pubmed) |
sschet/biobert_genetic_ner | sschet | 2023-02-01T03:40:52Z | 8 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"Biomedical",
"Genetics",
"en",
"dataset:JNLPBA",
"dataset:BC2GM",
"dataset:tner/bc5cdr",
"dataset:commanderstrife/jnlpba",
"dataset:bc2gm_corpus",
"dataset:drAbreu/bc4chemd_ner",
"dataset:linnaeus",
"dataset:chintagunta85/ncbi_disease",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-01T01:09:47Z | ---
language: "en"
license: apache-2.0
tags:
- token-classification
- NER
- Biomedical
- Genetics
datasets:
- JNLPBA
- BC2GM
- tner/bc5cdr
- commanderstrife/jnlpba
- bc2gm_corpus
- drAbreu/bc4chemd_ner
- linnaeus
- chintagunta85/ncbi_disease
---
A BioBERT model fine-tuned on NER with the JNLPBA and BC2GM corpora for genetic-class entities.
It was fine-tuned for use in a BioNER/BioNEN system, which is available at: https://github.com/librairy/bio-ner |
sschet/biobert_diseases_ner | sschet | 2023-02-01T03:40:32Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"Biomedical",
"Diseases",
"en",
"dataset:BC5CDR-diseases",
"dataset:ncbi_disease",
"dataset:tner/bc5cdr",
"dataset:commanderstrife/jnlpba",
"dataset:bc2gm_corpus",
"dataset:drAbreu/bc4chemd_ner",
"dataset:linnaeus",
"dataset:chintagunta85/ncbi_disease",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-01T00:59:17Z | ---
language: "en"
license: apache-2.0
tags:
- token-classification
- NER
- Biomedical
- Diseases
datasets:
- BC5CDR-diseases
- ncbi_disease
- tner/bc5cdr
- commanderstrife/jnlpba
- bc2gm_corpus
- drAbreu/bc4chemd_ner
- linnaeus
- chintagunta85/ncbi_disease
---
A BioBERT model fine-tuned on NER with the BC5CDR-diseases and NCBI-disease corpora.
It was fine-tuned for use in a BioNER/BioNEN system, which is available at: https://github.com/librairy/bio-ner |
sschet/biobert_chemical_ner | sschet | 2023-02-01T03:40:10Z | 10 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"NER",
"Biomedical",
"Chemicals",
"en",
"dataset:BC5CDR-chemicals",
"dataset:BC4CHEMD",
"dataset:tner/bc5cdr",
"dataset:commanderstrife/jnlpba",
"dataset:bc2gm_corpus",
"dataset:drAbreu/bc4chemd_ner",
"dataset:linnaeus",
"dataset:chintagunta85/ncbi_disease",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-01T00:37:59Z | ---
language: en
tags:
- token-classification
- NER
- Biomedical
- Chemicals
datasets:
- BC5CDR-chemicals
- BC4CHEMD
- tner/bc5cdr
- commanderstrife/jnlpba
- bc2gm_corpus
- drAbreu/bc4chemd_ner
- linnaeus
- chintagunta85/ncbi_disease
license: apache-2.0
---
A BioBERT model fine-tuned on NER with the BC5CDR-chemicals and BC4CHEMD corpora.
It was fine-tuned for use in a BioNER/BioNEN system, which is available at: https://github.com/librairy/bio-ner |
RinTrin/climate-tech-0.8605 | RinTrin | 2023-02-01T03:26:24Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-02-01T03:26:16Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
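For the clustering and semantic-search use cases mentioned above, embeddings can be compared with cosine similarity through `sentence_transformers.util` (a short sketch; `{MODEL_NAME}` is this card's placeholder and the corpus is illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')
corpus = ["Solar panel efficiency improvements", "Carbon capture and storage"]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("renewable energy technology", convert_to_tensor=True)

scores = util.cos_sim(query_embedding, corpus_embeddings)  # shape: (1, len(corpus))
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())
```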
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 250 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 500,
"warmup_steps": 50,
"weight_decay": 0.01
}
```
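Put together, the DataLoader, loss, and `fit()` parameters above correspond roughly to the following call (a sketch; the `InputExample` pairs are illustrative stand-ins for the actual training data, which is not described here):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses, InputExample

model = SentenceTransformer('{MODEL_NAME}')  # this card's placeholder
train_examples = [
    InputExample(texts=["wind turbine blade design", "rotor aerodynamics"], label=0.9),
    InputExample(texts=["wind turbine blade design", "credit card rewards"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    steps_per_epoch=500,
    scheduler="WarmupLinear",
    warmup_steps=50,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```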
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
tehqikness/manda2 | tehqikness | 2023-02-01T02:59:42Z | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-01T02:56:23Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### manda2 Dreambooth model trained by tehqikness with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
swl-models/lora-pad-colossus-omega | swl-models | 2023-02-01T02:44:47Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-01T02:39:02Z | ---
license: creativeml-openrail-m
---
|
Someman/bird-danphe | Someman | 2023-02-01T02:38:23Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"wildcard",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-01-04T16:50:46Z | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- wildcard
widget:
- text: a big dog with danphe face
---
# DreamBooth model for the bird concept trained by Someman on the Someman/danphe dataset.
This is a Stable Diffusion model fine-tuned on the bird concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of bird danphe**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `danphe` images for the wildcard theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Someman/bird-danphe')
image = pipeline("a photo of bird danphe").images[0]  # prompt from the instance_prompt above
image
```
|
Someman/sitar-psych | Someman | 2023-02-01T02:38:06Z | 1 | 0 | diffusers | [
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"wildcard",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-01-05T06:04:10Z | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- wildcard
widget:
- text: Paint a sitar surrounded by objects and symbols that reflect the cultural
and artistic influences of Salvador Dali
---
# DreamBooth model for the instrument concept trained by Someman on the Someman/Sitar dataset.
This is a Stable Diffusion model fine-tuned on the instrument concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of sitar**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `sitar` images for the wildcard theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Someman/sitar-psych')
image = pipeline("a photo of sitar").images[0]  # prompt from the instance_prompt above
image
```
|
muhtasham/small-mlm-glue-rte-custom-tokenizer-expand-vocab | muhtasham | 2023-02-01T02:07:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-01T01:42:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: small-mlm-glue-rte-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-rte-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.366 | 1.6 | 500 | 4.5434 |
| 4.5071 | 3.21 | 1000 | 4.1161 |
| 4.0413 | 4.81 | 1500 | 3.7831 |
| 3.6922 | 6.41 | 2000 | 3.4965 |
| 3.4951 | 8.01 | 2500 | 3.3765 |
| 3.2834 | 9.62 | 3000 | 3.1673 |
| 3.1036 | 11.22 | 3500 | 3.1155 |
| 2.9958 | 12.82 | 4000 | 3.0351 |
| 2.8456 | 14.42 | 4500 | 2.9650 |
| 2.6968 | 16.03 | 5000 | 2.9781 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
muhtasham/small-mlm-glue-qqp-from-scratch-custom-tokenizer-expand-vocab | muhtasham | 2023-02-01T01:52:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-01T01:19:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: small-mlm-glue-qqp-from-scratch-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-qqp-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.3866 | 0.4 | 500 | 6.2514 |
| 6.1051 | 0.8 | 1000 | 5.9205 |
| 5.7552 | 1.2 | 1500 | 5.7346 |
| 5.5838 | 1.6 | 2000 | 5.6074 |
| 5.5191 | 2.0 | 2500 | 5.5317 |
| 5.4095 | 2.4 | 3000 | 5.4497 |
| 5.3784 | 2.8 | 3500 | 5.4039 |
| 5.2816 | 3.2 | 4000 | 5.3645 |
| 5.2523 | 3.6 | 4500 | 5.3406 |
| 5.2021 | 4.0 | 5000 | 5.2614 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
swl-models/toooajk-yagurumagiku-v3-dreambooth | swl-models | 2023-02-01T01:47:41Z | 2,445 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-01T01:26:25Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
---
|
ongkn/smtn_ppo-Huggy | ongkn | 2023-02-01T01:45:38Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-01T01:45:31Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ongkn/smtn_ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
swl-models/toooajk-yagurumagiku-v3-hypernet | swl-models | 2023-02-01T01:34:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-01T01:27:00Z | ---
license: creativeml-openrail-m
---
|
swl-models/xiaolxl-guofeng-v3 | swl-models | 2023-02-01T01:27:23Z | 5,596 | 14 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-01-31T15:46:10Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
duplicated_from: xiaolxl/GuoFeng3
---
# Introduction - GuoFeng3
Welcome to the GuoFeng3 model (TIP: the name of this version has been fine-tuned). This is a Chinese gorgeous antique-style model, which can also be described as an ancient-style game character model with a 2.5D texture. The third generation greatly reduces the difficulty of getting started and adds scene elements and male ancient-style characters; in addition, elements of other styles were added so that the model adapts better to other tags. This generation repairs broken faces and hands to a certain extent, and the training material size has been increased to 1024 on the longest side.
# Installation
1. Put the GuoFeng3.ckpt model into your SD models directory.
2. This model comes with a VAE. If your program does not support it, remember to select any VAE file, otherwise the images will be gray.
# How to use
**TIP: After a day of testing, we found that many characters may show red-eye problems; try adding `red eyes` to the negative prompt. If the colors are oversaturated, try lowering the CFG.**
Simple: the third generation greatly reduces the difficulty of getting started.
- **Keywords:**
```
best quality, masterpiece, highres, 1girl,china dress,Beautiful face
```
- **Negative prompt:**
```
NSFW, lowres,bad anatomy,bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worstquality, low quality, normal quality,jpegartifacts,signature, watermark, username,blurry,bad feet
```
---
Advanced: if you also want to make the image as good as possible, try the following configuration.
- Sampling steps:**50**
- Sampler:**DPM++ SDE Karras or DDIM**
- The size of the picture should be at least **1024**
- CFG:**4-6**
- **Better negative prompt (thanks to group members for providing it):**
```
(((simple background))),monochrome ,lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worstquality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, lowres, bad anatomy, bad hands, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, ugly,pregnant,vore,duplicate,morbid,mutilated,transsexual, hermaphrodite,long neck,mutated hands,poorly drawn hands,poorly drawn face,mutation,deformed,blurry,bad anatomy,bad proportions,malformed limbs,extra limbs,cloned face,disfigured,gross proportions, (((missing arms))),(((missing legs))), (((extra arms))),(((extra legs))),pubic hair, plump,bad legs,error legs,username,blurry,bad feet
```
- **If you want richer elements, you can add the keywords below:**
```
Beautiful face,
hair ornament, solo,looking at viewer,smile,closed mouth,lips
china dress,dress,hair ornament, necklace, jewelry, long hair, earrings, chinese clothes,
architecture,east asian architecture,building,outdoors,rooftop,city,cityscape
```
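The instructions above target the A1111 WebUI; since the repository is also diffusers-compatible, loading could look roughly like this (a sketch; the fp16 dtype is an assumption, while the 1024x1024 size, CFG, and prompts follow the recommendations above):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "swl-models/xiaolxl-guofeng-v3", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "best quality, masterpiece, highres, 1girl, china dress, Beautiful face",
    negative_prompt="NSFW, lowres, bad anatomy, bad hands, text, error, missing fingers",
    num_inference_steps=50,
    guidance_scale=5.0,  # CFG 4-6, as recommended above
    height=1024,
    width=1024,
).images[0]
image.save("guofeng3.png")
```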
# Examples
(You can find the original images in the file list and load them into the WebUI to view keywords and other information.)
<img src=https://huggingface.co/xiaolxl/GuoFeng3/resolve/main/examples/e1.png>
<img src=https://huggingface.co/xiaolxl/GuoFeng3/resolve/main/examples/e2.png>
<img src=https://huggingface.co/xiaolxl/GuoFeng3/resolve/main/examples/e3.png>
<img src=https://huggingface.co/xiaolxl/GuoFeng3/resolve/main/examples/e4.png> |
muhtasham/small-mlm-glue-qnli-from-scratch-custom-tokenizer-expand-vocab | muhtasham | 2023-02-01T01:16:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-01T00:38:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: small-mlm-glue-qnli-from-scratch-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-qnli-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6626 | 0.4 | 500 | 6.7278 |
| 6.5829 | 0.8 | 1000 | 6.5388 |
| 6.4637 | 1.2 | 1500 | 6.3776 |
| 6.3446 | 1.6 | 2000 | 6.2966 |
| 6.2664 | 2.0 | 2500 | 6.2288 |
| 6.2043 | 2.4 | 3000 | 6.1839 |
| 6.1615 | 2.8 | 3500 | 6.1349 |
| 6.1085 | 3.2 | 4000 | 6.0839 |
| 6.0625 | 3.6 | 4500 | 6.0543 |
| 6.0432 | 4.0 | 5000 | 6.0244 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
PeterDerLustige/dqn-SpaceInvadersNoFrameskip-v4 | PeterDerLustige | 2023-02-01T01:09:05Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-01T01:08:26Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 640.50 +/- 283.52
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PeterDerLustige -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PeterDerLustige -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga PeterDerLustige
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
muhtasham/small-mlm-glue-qnli-custom-tokenizer-expand-vocab | muhtasham | 2023-02-01T01:05:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-01T00:23:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: small-mlm-glue-qnli-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-qnli-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.5339 | 0.4 | 500 | 4.7224 |
| 4.6477 | 0.8 | 1000 | 4.3242 |
| 4.3146 | 1.2 | 1500 | 3.9988 |
| 4.0046 | 1.6 | 2000 | 3.7777 |
| 3.7942 | 2.0 | 2500 | 3.5976 |
| 3.5684 | 2.4 | 3000 | 3.4426 |
| 3.4406 | 2.8 | 3500 | 3.3275 |
| 3.332 | 3.2 | 4000 | 3.2361 |
| 3.1941 | 3.6 | 4500 | 3.1616 |
| 3.0981 | 4.0 | 5000 | 3.0716 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
Botnoi/wav2vec2-xls-r-300m-th-v7_0 | Botnoi | 2023-02-01T00:57:16Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-01-31T10:01:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-th-v7_0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-th-v7_0
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4099
- Wer: 0.9988
- Cer: 0.7861
- Clean Cer: 0.7617
- Learning Rate: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Clean Cer | Learning Rate |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:---------:|:------:|
| 8.5484 | 0.4 | 500 | 3.6234 | 1.0 | 1.0 | 1.0 | 0.0000 |
| 3.2275 | 0.8 | 1000 | 2.2960 | 0.9998 | 0.7081 | 0.6540 | 0.0000 |
| 0.9955 | 1.2 | 1500 | 1.2224 | 0.9549 | 0.4327 | 0.3756 | 0.0000 |
| 0.66 | 1.61 | 2000 | 0.9559 | 0.9232 | 0.3651 | 0.3040 | 0.0000 |
| 0.546 | 2.01 | 2500 | 0.9207 | 0.9481 | 0.3585 | 0.2826 | 0.0000 |
| 0.4459 | 2.41 | 3000 | 0.7701 | 0.8693 | 0.2940 | 0.2383 | 0.0000 |
| 0.4041 | 2.81 | 3500 | 0.7756 | 0.8224 | 0.2949 | 0.2634 | 0.0000 |
| 0.3637 | 3.21 | 4000 | 0.6015 | 0.7015 | 0.2064 | 0.1807 | 0.0000 |
| 0.334 | 3.61 | 4500 | 0.5615 | 0.6675 | 0.1907 | 0.1638 | 0.0000 |
| 0.3283 | 4.02 | 5000 | 0.6205 | 0.7073 | 0.2092 | 0.1803 | 0.0000 |
| 0.3762 | 4.42 | 5500 | 0.7517 | 0.6366 | 0.1778 | 0.1600 | 0.0000 |
| 0.4954 | 4.82 | 6000 | 0.9374 | 0.7073 | 0.2023 | 0.1735 | 0.0000 |
| 0.5568 | 5.22 | 6500 | 0.8859 | 0.7027 | 0.1982 | 0.1666 | 0.0000 |
| 0.6756 | 5.62 | 7000 | 1.0252 | 0.6802 | 0.1920 | 0.1628 | 0.0000 |
| 0.7752 | 6.02 | 7500 | 1.1259 | 0.7657 | 0.2309 | 0.1908 | 0.0000 |
| 0.8305 | 6.43 | 8000 | 1.3857 | 0.9029 | 0.3252 | 0.2668 | 0.0000 |
| 1.7385 | 6.83 | 8500 | 3.2320 | 0.9998 | 0.9234 | 0.9114 | 0.0000 |
| 2.7839 | 7.23 | 9000 | 3.3238 | 0.9999 | 0.9400 | 0.9306 | 0.0000 |
| 2.8307 | 7.63 | 9500 | 3.2678 | 0.9998 | 0.9167 | 0.9053 | 0.0000 |
| 2.7672 | 8.03 | 10000 | 3.2435 | 0.9995 | 0.8992 | 0.8867 | 0.0000 |
| 2.7426 | 8.43 | 10500 | 3.2396 | 0.9995 | 0.8720 | 0.8561 | 0.0000 |
| 2.7608 | 8.84 | 11000 | 3.2689 | 0.9993 | 0.8399 | 0.8202 | 0.0000 |
| 2.8195 | 9.24 | 11500 | 3.3283 | 0.9989 | 0.8084 | 0.7865 | 0.0000 |
| 2.9044 | 9.64 | 12000 | 3.4099 | 0.9988 | 0.7861 | 0.7617 | 0.0000 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ongkn/smtn_test-deeprl-course | ongkn | 2023-02-01T00:46:01Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-01T00:45:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.84 +/- 19.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
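The usage section above is left as a TODO; a hedged sketch of what loading and running the agent could look like (the checkpoint filename is an assumption; check the repository's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on common Deep RL course naming.
checkpoint = load_from_hub(repo_id="ongkn/smtn_test-deeprl-course", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # classic Gym API, matching SB3 1.x
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```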
|
Jorgeleon/jleonart | Jorgeleon | 2023-02-01T00:45:48Z | 5 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-01T00:34:15Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### jleonart Dreambooth model trained by Jorgeleon with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
swl-models/wind-v2 | swl-models | 2023-02-01T00:43:50Z | 31 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-01-31T14:31:44Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
---
# Model Card for Wind v2
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** 9
- **Shared by:** SWL Models
- **Model type:** Stable Diffusion Checkpoint
- **Language(s) (NLP):** English
- **License:** CreativeML Open RAIL-M
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
muhtasham/tiny-mlm-glue-wnli-custom-tokenizer-expand-vocab | muhtasham | 2023-02-01T00:41:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-01T00:27:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-wnli-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-wnli-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.719 | 6.25 | 500 | 3.7847 |
| 4.0062 | 12.5 | 1000 | 3.4790 |
| 3.5607 | 18.75 | 1500 | 3.3417 |
| 3.3 | 25.0 | 2000 | 2.9815 |
| 3.0174 | 31.25 | 2500 | 2.7710 |
| 2.8668 | 37.5 | 3000 | 2.5395 |
| 2.6312 | 43.75 | 3500 | 2.4257 |
| 2.4138 | 50.0 | 4000 | 1.9615 |
| 2.2602 | 56.25 | 4500 | 2.1444 |
| 2.1614 | 62.5 | 5000 | 1.6887 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
RinTrin/climate-tech-test | RinTrin | 2023-02-01T00:40:37Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-02-01T00:40:29Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which one, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross entropy loss by comparing with true pairs.
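A minimal sketch of that in-batch objective (the function name and the `scale` temperature are illustrative assumptions; the exact setup is in `train_script.py`):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """Cross-entropy over in-batch cosine similarities; true pairs sit on the diagonal."""
    emb_a = F.normalize(emb_a, p=2, dim=1)
    emb_b = F.normalize(emb_b, p=2, dim=1)
    scores = emb_a @ emb_b.t() * scale                      # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)  # index of each row's true pair
    return F.cross_entropy(scores, labels)
```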
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability, whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
Zekunli/t5-base-extraction-cnndm_fs0.05-all | Zekunli | 2023-02-01T00:40:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-01-31T23:50:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-extraction-cnndm_fs0.05-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-extraction-cnndm_fs0.05-all
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3409 | 0.45 | 200 | 1.9264 |
| 2.0082 | 0.9 | 400 | 1.8570 |
| 1.9247 | 1.35 | 600 | 1.8290 |
| 1.8895 | 1.81 | 800 | 1.8162 |
| 1.8625 | 2.26 | 1000 | 1.8015 |
| 1.8354 | 2.71 | 1200 | 1.7894 |
| 1.8013 | 3.16 | 1400 | 1.7824 |
| 1.7901 | 3.61 | 1600 | 1.7796 |
| 1.7769 | 4.06 | 1800 | 1.7807 |
| 1.7661 | 4.51 | 2000 | 1.7646 |
| 1.7536 | 4.97 | 2200 | 1.7605 |
| 1.9045 | 5.42 | 2400 | 2.1358 |
| 2.4322 | 5.87 | 2600 | 2.3688 |
| 2.4809 | 6.32 | 2800 | 2.3622 |
| 2.4628 | 6.77 | 3000 | 2.3625 |
| 2.4676 | 7.22 | 3200 | 2.3639 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Zekunli/t5-base-extraction-cnndm_fs0.1-all | Zekunli | 2023-02-01T00:34:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-01-31T23:50:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-extraction-cnndm_fs0.1-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-extraction-cnndm_fs0.1-all
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2503 | 0.45 | 200 | 1.8495 |
| 1.9367 | 0.9 | 400 | 1.7930 |
| 1.8669 | 1.35 | 600 | 1.7704 |
| 1.8371 | 1.81 | 800 | 1.7481 |
| 1.8051 | 2.26 | 1000 | 1.7362 |
| 1.7843 | 2.71 | 1200 | 1.7345 |
| 1.7669 | 3.16 | 1400 | 1.7159 |
| 1.8786 | 3.61 | 1600 | 1.9442 |
| 2.0554 | 4.06 | 1800 | 1.9691 |
| 2.0521 | 4.51 | 2000 | 1.9731 |
| 2.0579 | 4.97 | 2200 | 1.9744 |
| 2.0514 | 5.42 | 2400 | 1.9743 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
muhtasham/tiny-mlm-glue-stsb-custom-tokenizer-expand-vocab | muhtasham | 2023-02-01T00:26:51Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-01T00:12:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-stsb-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-stsb-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.4557 | 0.7 | 500 | 5.4720 |
| 5.7672 | 1.39 | 1000 | 5.1064 |
| 5.4426 | 2.09 | 1500 | 5.0087 |
| 5.1495 | 2.78 | 2000 | 4.8300 |
| 5.186 | 3.48 | 2500 | 4.6170 |
| 5.0339 | 4.17 | 3000 | 4.6473 |
| 4.9093 | 4.87 | 3500 | 4.5680 |
| 4.9144 | 5.56 | 4000 | 4.6118 |
| 4.8143 | 6.26 | 4500 | 4.4421 |
| 4.8065 | 6.95 | 5000 | 4.4684 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
Tune-A-Video-library/a-man-is-surfing | Tune-A-Video-library | 2023-02-01T00:22:22Z | 4 | 10 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"text-to-video",
"tune-a-video",
"arxiv:2212.11565",
"arxiv:2112.10752",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"diffusers:TuneAVideoPipeline",
"region:us"
]
| text-to-image | 2023-01-30T11:18:51Z | ---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
training_prompt: A man is surfing
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- text-to-video
- tune-a-video
inference: false
---
# Tune-A-Video - a-man-is-surfing
## Model description
- Base model: [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
- Training prompt: A man is surfing
## Samples
Test prompt: A panda is surfing

## Related papers:
- [Tune-A-Video](https://arxiv.org/abs/2212.11565): One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
- [Stable-Diffusion](https://arxiv.org/abs/2112.10752): High-Resolution Image Synthesis with Latent Diffusion Models
|
ClashSAN/miniSD-quantized-onnx | ClashSAN | 2023-02-01T00:09:21Z | 0 | 1 | null | [
"onnx",
"region:us"
]
| null | 2023-01-31T23:28:26Z | 
- runs with only ~0.6 GB of additional RAM with this model at 192x256
- runs with only ~1 GB of additional RAM with this model at 320x320
```python
from diffusers import StableDiffusionOnnxPipeline
import torch
from diffusers import (
DDPMScheduler,
DDIMScheduler,
PNDMScheduler,
LMSDiscreteScheduler,
EulerDiscreteScheduler,
EulerAncestralDiscreteScheduler,
DPMSolverMultistepScheduler
)
# Any of the schedulers imported above can be swapped in here
scheduler = DPMSolverMultistepScheduler.from_pretrained("./model", subfolder="scheduler")
pipe = StableDiffusionOnnxPipeline.from_pretrained(
'./model',
custom_pipeline="lpw_stable_diffusion_onnx",
revision="onnx",
scheduler=scheduler,
safety_checker=None,
provider="CPUExecutionProvider"
)
prompt = "a photo of "
neg_prompt = ""
generator = torch.Generator(device="cpu").manual_seed(1)
image = pipe.text2img(prompt,negative_prompt=neg_prompt, num_inference_steps=8, width=192, height=256, guidance_scale=10, generator=generator, max_embeddings_multiples=3).images[0]
image.save('./test.png')
``` |
farouk1/kimo | farouk1 | 2023-02-01T00:07:58Z | 0 | 0 | keras | [
"keras",
"art",
"zero-shot-classification",
"ar",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
]
| zero-shot-classification | 2023-02-01T00:05:14Z | ---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- ar
metrics:
- chrf
library_name: keras
pipeline_tag: zero-shot-classification
tags:
- art
--- |
muhtasham/tiny-mlm-glue-rte-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T23:53:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T23:37:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-rte-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-rte-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.249 | 1.6 | 500 | 5.5560 |
| 5.5772 | 3.21 | 1000 | 5.2693 |
| 5.3637 | 4.81 | 1500 | 5.0609 |
| 5.1969 | 6.41 | 2000 | 4.9263 |
| 5.1287 | 8.01 | 2500 | 4.8379 |
| 4.98 | 9.62 | 3000 | 4.6589 |
| 4.8837 | 11.22 | 3500 | 4.6109 |
| 4.8127 | 12.82 | 4000 | 4.5723 |
| 4.6909 | 14.42 | 4500 | 4.4488 |
| 4.5862 | 16.03 | 5000 | 4.4372 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
muhtasham/small-mlm-glue-mnli-from-scratch-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T23:53:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T22:40:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: small-mlm-glue-mnli-from-scratch-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-mnli-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6238 | 0.4 | 500 | 6.6413 |
| 6.5141 | 0.8 | 1000 | 6.4438 |
| 6.3709 | 1.2 | 1500 | 6.2642 |
| 6.2388 | 1.6 | 2000 | 6.1712 |
| 6.1214 | 2.0 | 2500 | 6.1089 |
| 6.0474 | 2.4 | 3000 | 6.0607 |
| 5.9719 | 2.8 | 3500 | 5.9988 |
| 5.9333 | 3.2 | 4000 | 5.9619 |
| 5.9019 | 3.6 | 4500 | 5.9140 |
| 5.9164 | 4.0 | 5000 | 5.8936 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
codeSpaghetti/Reinforce-helicopter-1 | codeSpaghetti | 2023-01-31T23:53:02Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T23:38:06Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-helicopter-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 47.50 +/- 42.15
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LastNPCAlex/ER_ohwx_v2_4000 | LastNPCAlex | 2023-01-31T23:51:31Z | 4 | 2 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-01-30T02:36:43Z | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

# Cyberpunk Edgerunners Fine-Tuned Anything v4.5 (WIP)
This is a DreamBooth fine-tune of Anything v4.5, focused on capturing the style of Cyberpunk Edgerunners, particularly when it comes to fashionware (fashionable cyberware), faces, and fashion. Since it was trained on Anything v4.5, this model is intended to produce high-quality, highly detailed anime-style images with just a few prompts. Like other anime-style Stable Diffusion models, it also supports Danbooru tags.
by [⛅Ascended NPC Alex⛅](https://twitter.com/lastnpcalex)
To introduce a more Pondsmith-Cyberpunk-inspired style for portraits and cyberware, add **ohwx** to your prompt, as demonstrated below:
```
(((ohwx))),1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden
```
))%20cyberpunk%2C1girl%2C%20white%20hair%2C%20golden%20eyes%2C%20beautiful%20eyes%2C%20detail%2C%20flower%20meadow%2C%20cumulonimbus%20clouds%2C%20lighting%2C%20detai%20(1).png)
Adding "cyberpunk 2077" or even "cyberpunk edgerunners" yields the best results when used along with [Danbooru tags](https://danbooru.donmai.us/tags).
Spin it up and give it a go in colab, chooms:
[](https://colab.research.google.com/drive/1t19hEfbaaJ-Jj57oDFgVsbhY7Dn-Para?usp=sharing)
## Prompt Examples
### Cyberpunk Abraham Lincoln

**Prompt**
```
close-up portrait best quality (((ohwx))),1boy,(((cyberpunk Abraham Lincoln))), wearing netrunner suit, (fashionable, cybernetic, prosthetics), cybereyes, cyberoptics, blurred night city background, detailed, anime style, studio quality, masterpiece, blurred background, ((perfect masculine body)), intricate, 8k, highly detailed, shy, intense, sharp focus, cinematic lighting, studio quality, best quality, studio trigger
```
**Negative**
```
(((deformed))), blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, (ugly), (poorly drawn hands), fused fingers, messy drawing, disfigured, malformed, mutated, anatomical nonsense, text font ui, error, malformed hands, long neck, blurred, lowers, low res, bad anatomy, bad proportions, bad shadow, fused breasts, bad hands, fused hand, missing hand, disappearing arms, disappearing thigh, disappearing calf, disappearing legs, fused ears, bad ears, poorly drawn ears, text, ui, error, missing fingers, missing limb, fused fingers, one hand with more than 5 fingers, one hand with less than5 fingers, one hand with more than 5 digit, one hand with less than 5 digit, extra digit, fewer digits, fused digit, missing digit, bad digit, liquid digit, colorful tongue, black tongue, cropped, watermark, username, blurry, JPEG artifacts, signature, 3D, 3D game, 3D game scene, 3D character, malformed feet, extra feet, bad feet, poorly drawn feet, fused feet, missing feet, extra shoes, bad shoes, fused shoes, more than two shoes, poorly drawn shoes, bad gloves, poorly drawn gloves, fused gloves, bad mouth, fused mouth, poorly drawn mouth, bad tongue, tongue within mouth, too long tongue, black tongue, big mouth, cracked mouth, bad mouth, worst quality, low quality, normal quality, QR code, bar code, censored
```
**Seed**
```
1438659680
```
**Steps:** 140 **CFG:** 11.5 **Sampler:** Euler Ancestral **Dimensions:** 512 x 768
### Edgerunner (Higher Res)

**Prompt**
```
portrait best quality 1girl, (((ohwx))), (((cyberpunk anime (scarlett johansson) ))), 1girl, (((dark black hair))), (short hair), (hands at hips), (fashionable, cybernetic, prosthetics), cyberpunk 2077, cyberpunk edgerunners, blurred night city background, detailed, studio quality, masterpiece, (holster on hip), 8k, uhd, intricate, perfect face, sharp focus, cinematic lighting, best quality, studio trigger, smiling, rebecca, dark hair, stomach, black widow
```
**Negative**
```
(((deformed))), blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, (ugly), (poorly drawn hands), fused fingers, messy drawing, disfigured, malformed, mutated, anatomical nonsense, text font ui, error, malformed hands, long neck, blurred, lowers, low res, bad anatomy, bad proportions, bad shadow, fused breasts, bad hands, fused hand, missing hand, disappearing arms, disappearing thigh, disappearing calf, disappearing legs, fused ears, bad ears, poorly drawn ears, text, ui, error, missing fingers, missing limb, fused fingers, one hand with more than 5 fingers, one hand with less than5 fingers, one hand with more than 5 digit, one hand with less than 5 digit, extra digit, fewer digits, fused digit, missing digit, bad digit, liquid digit, colorful tongue, black tongue, cropped, watermark, username, blurry, JPEG artifacts, signature, 3D, 3D game, 3D game scene, 3D character, malformed feet, extra feet, bad feet, poorly drawn feet, fused feet, missing feet, extra shoes, bad shoes, fused shoes, more than two shoes, poorly drawn shoes, bad gloves, poorly drawn gloves, fused gloves, bad mouth, fused mouth, poorly drawn mouth, bad tongue, tongue within mouth, too long tongue, black tongue, big mouth, cracked mouth, bad mouth, worst quality, low quality, normal quality, QR code, bar code, censored
```
**Seed**
```
4187485374
```
**Steps:** 140 **CFG:** 11.5 **Sampler:** Euler Ancestral **Dimensions:** 512 x 768 |
muhtasham/tiny-mlm-glue-qqp-from-scratch-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T23:46:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T23:25:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-qqp-from-scratch-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-qqp-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.5082 | 0.4 | 500 | 8.3441 |
| 7.6091 | 0.8 | 1000 | 7.0083 |
| 6.6817 | 1.2 | 1500 | 6.5299 |
| 6.3695 | 1.6 | 2000 | 6.3370 |
| 6.256 | 2.0 | 2500 | 6.2553 |
| 6.1524 | 2.4 | 3000 | 6.1720 |
| 6.083 | 2.8 | 3500 | 6.0933 |
| 5.9638 | 3.2 | 4000 | 6.0325 |
| 5.9574 | 3.6 | 4500 | 5.9868 |
| 5.8971 | 4.0 | 5000 | 5.9284 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
tornad100/ppo-LunarLander-v2 | tornad100 | 2023-01-31T23:35:57Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T23:35:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.48 +/- 17.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the actual artifact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub, then load it with SB3
checkpoint = load_from_hub(repo_id="tornad100/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
robinsk8a/q-FrozenLake-v1-4x4 | robinsk8a | 2023-01-31T23:34:39Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T23:34:03Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.72 +/- 0.45
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="robinsk8a/q-FrozenLake-v1-4x4", filename="q-learning.pkl")  # helper sketched below
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
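`load_from_hub` above is a notebook helper rather than a library import; a minimal sketch, assuming the repo stores a pickled dict with keys like `env_id` and `qtable` (as in the Deep RL course notebooks):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict from the Hub and unpickle it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```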
|
muhtasham/tiny-mlm-glue-qnli-from-scratch-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T23:22:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T22:58:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-qnli-from-scratch-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-qnli-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.5281 | 0.4 | 500 | 8.3592 |
| 7.6191 | 0.8 | 1000 | 7.1164 |
| 6.8775 | 1.2 | 1500 | 6.7262 |
| 6.6762 | 1.6 | 2000 | 6.6561 |
| 6.6116 | 2.0 | 2500 | 6.6190 |
| 6.5919 | 2.4 | 3000 | 6.6003 |
| 6.575 | 2.8 | 3500 | 6.5831 |
| 6.5329 | 3.2 | 4000 | 6.5478 |
| 6.5152 | 3.6 | 4500 | 6.5384 |
| 6.5249 | 4.0 | 5000 | 6.5220 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
francisco-perez-sorrosal/dccuchile-distilbert-base-spanish-uncased-finetuned-with-spanish-tweets-clf-cleaned-ds | francisco-perez-sorrosal | 2023-01-31T23:21:10Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:dataset",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-01-31T22:14:32Z | ---
tags:
- generated_from_trainer
datasets:
- dataset
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: dccuchile-distilbert-base-spanish-uncased-finetuned-with-spanish-tweets-clf-cleaned-ds
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: dataset
type: dataset
config: 60-20-20
split: dev
args: 60-20-20
metrics:
- name: Accuracy
type: accuracy
value: 0.650310988251555
- name: F1
type: f1
value: 0.6518765643027159
- name: Precision
type: precision
value: 0.6625453481119005
- name: Recall
type: recall
value: 0.6498098682990169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dccuchile-distilbert-base-spanish-uncased-finetuned-with-spanish-tweets-clf-cleaned-ds
This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on the dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6605
- Accuracy: 0.6503
- F1: 0.6519
- Precision: 0.6625
- Recall: 0.6498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8558 | 1.0 | 543 | 0.7705 | 0.6628 | 0.6408 | 0.6585 | 0.6404 |
| 0.5485 | 2.0 | 1086 | 0.8657 | 0.6593 | 0.6436 | 0.6578 | 0.6388 |
| 0.3071 | 3.0 | 1629 | 1.3021 | 0.6586 | 0.6556 | 0.6551 | 0.6581 |
| 0.1581 | 4.0 | 2172 | 1.6605 | 0.6503 | 0.6519 | 0.6625 | 0.6498 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
neurator/ppo-Huggy | neurator | 2023-01-31T23:17:50Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-01-31T23:17:43Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: neurator/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
patrickvonplaten/lora_dreambooth_dog_example | patrickvonplaten | 2023-01-31T23:17:36Z | 18 | 2 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-01-18T14:59:36Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - https://huggingface.co/patrickvonplaten/dummy
These are LoRA adaptation weights for https://huggingface.co/patrickvonplaten/dummy. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). A minimal loading sketch follows (assumption: the `load_attn_procs` LoRA API available in diffusers at the time; the prompt is illustrative):
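```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the LoRA attention weights from this repo on top of the base UNet
pipe.unet.load_attn_procs("patrickvonplaten/lora_dreambooth_dog_example")

image = pipe("A photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```
You can find some example images in the following.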



 |
muhtasham/tiny-mlm-glue-mrpc-from-scratch-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T22:56:24Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T22:42:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-mrpc-from-scratch-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-mrpc-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.4592 | 1.09 | 500 | 8.3426 |
| 7.6504 | 2.18 | 1000 | 7.1366 |
| 6.9286 | 3.27 | 1500 | 6.8071 |
| 6.7176 | 4.36 | 2000 | 6.6950 |
| 6.6385 | 5.45 | 2500 | 6.7215 |
| 6.6329 | 6.54 | 3000 | 6.6601 |
| 6.5819 | 7.63 | 3500 | 6.6904 |
| 6.5994 | 8.71 | 4000 | 6.6968 |
| 6.5415 | 9.8 | 4500 | 6.6803 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-glue-mrpc-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T22:44:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T22:28:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-mrpc-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-mrpc-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.1957 | 1.09 | 500 | 5.5172 |
| 5.5021 | 2.18 | 1000 | 5.1265 |
| 5.2379 | 3.27 | 1500 | 5.0413 |
| 5.1491 | 4.36 | 2000 | 4.9136 |
| 5.014 | 5.45 | 2500 | 4.8558 |
| 4.9507 | 6.54 | 3000 | 4.7338 |
| 4.7924 | 7.63 | 3500 | 4.6922 |
| 4.7739 | 8.71 | 4000 | 4.6100 |
| 4.6749 | 9.8 | 4500 | 4.6575 |
| 4.6135 | 10.89 | 5000 | 4.4922 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-glue-mnli-from-scratch-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T22:40:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T22:15:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-mnli-from-scratch-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-mnli-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4416
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.551 | 0.4 | 500 | 8.3700 |
| 7.6033 | 0.8 | 1000 | 7.0571 |
| 6.8118 | 1.2 | 1500 | 6.6286 |
| 6.587 | 1.6 | 2000 | 6.5362 |
| 6.5035 | 2.0 | 2500 | 6.5191 |
| 6.4641 | 2.4 | 3000 | 6.5131 |
| 6.4413 | 2.8 | 3500 | 6.4910 |
| 6.4254 | 3.2 | 4000 | 6.4725 |
| 6.4222 | 3.6 | 4500 | 6.4410 |
| 6.4471 | 4.0 | 5000 | 6.4416 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
muhtasham/small-mlm-glue-cola-from-scratch-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T22:37:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T21:58:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: small-mlm-glue-cola-from-scratch-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-cola-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.5029 | 0.47 | 500 | 6.2502 |
| 6.2048 | 0.94 | 1000 | 6.0975 |
| 5.9473 | 1.4 | 1500 | 5.8848 |
| 5.839 | 1.87 | 2000 | 5.9288 |
| 5.7542 | 2.34 | 2500 | 5.7539 |
| 5.7137 | 2.81 | 3000 | 5.5897 |
| 5.6052 | 3.27 | 3500 | 5.7464 |
| 5.4948 | 3.74 | 4000 | 5.5480 |
| 5.5009 | 4.21 | 4500 | 5.5464 |
| 5.3884 | 4.68 | 5000 | 5.3242 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
Shularp/mt5-small-finetuned-ar-to-th-3rd-round | Shularp | 2023-01-31T22:35:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-01-31T19:28:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-small-finetuned-ar-to-th-finetuned-ar-to-th-2nd-round-finetuned-ar-to-th-3rd-round
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-ar-to-th-finetuned-ar-to-th-2nd-round-finetuned-ar-to-th-3rd-round
This model is a fine-tuned version of [Shularp/mt5-small-finetuned-ar-to-th](https://huggingface.co/Shularp/mt5-small-finetuned-ar-to-th) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7393
- Bleu: 5.3860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5917 | 1.0 | 16806 | 2.8661 | 4.1430 |
| 3.432 | 2.0 | 33612 | 2.7698 | 5.0779 |
| 3.3793 | 3.0 | 50418 | 2.7393 | 5.3860 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-glue-mnli-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T22:26:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T22:01:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-mnli-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-mnli-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.183 | 0.4 | 500 | 5.4864 |
| 5.4557 | 0.8 | 1000 | 5.1647 |
| 5.2719 | 1.2 | 1500 | 4.9770 |
| 5.1557 | 1.6 | 2000 | 4.8840 |
| 5.0024 | 2.0 | 2500 | 4.8020 |
| 4.9317 | 2.4 | 3000 | 4.7280 |
| 4.834 | 2.8 | 3500 | 4.6366 |
| 4.7678 | 3.2 | 4000 | 4.5993 |
| 4.7007 | 3.6 | 4500 | 4.5062 |
| 4.6734 | 4.0 | 5000 | 4.4644 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
francisco-perez-sorrosal/distilbert-base-uncased-finetuned-with-spanish-tweets-clf-cleaned-ds | francisco-perez-sorrosal | 2023-01-31T22:20:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:dataset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-01-31T20:15:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- dataset
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-uncased-finetuned-with-spanish-tweets-clf-cleaned-ds
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: dataset
type: dataset
config: 60-20-20
split: dev
args: 60-20-20
metrics:
- name: Accuracy
type: accuracy
value: 0.5556323427781618
- name: F1
type: f1
value: 0.5577964748279268
- name: Precision
type: precision
value: 0.5682169161320979
- name: Recall
type: recall
value: 0.5539741666889855
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-with-spanish-tweets-clf-cleaned-ds
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1229
- Accuracy: 0.5556
- F1: 0.5578
- Precision: 0.5682
- Recall: 0.5540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0683 | 1.0 | 543 | 1.0019 | 0.4997 | 0.4041 | 0.4724 | 0.4488 |
| 0.9372 | 2.0 | 1086 | 0.9395 | 0.5425 | 0.5143 | 0.5480 | 0.5123 |
| 0.7283 | 3.0 | 1629 | 0.9674 | 0.5632 | 0.5615 | 0.5658 | 0.5587 |
| 0.5127 | 4.0 | 2172 | 1.1229 | 0.5556 | 0.5578 | 0.5682 | 0.5540 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-glue-cola-from-scratch-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T22:11:57Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T21:57:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-cola-from-scratch-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-cola-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.4335 | 0.47 | 500 | 8.2468 |
| 7.5357 | 0.94 | 1000 | 7.0009 |
| 6.6657 | 1.4 | 1500 | 6.5157 |
| 6.4131 | 1.87 | 2000 | 6.4843 |
| 6.3502 | 2.34 | 2500 | 6.3349 |
| 6.3256 | 2.81 | 3000 | 6.2425 |
| 6.2323 | 3.27 | 3500 | 6.3152 |
| 6.1486 | 3.74 | 4000 | 6.1898 |
| 6.1377 | 4.21 | 4500 | 6.1829 |
| 6.065 | 4.68 | 5000 | 6.0159 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-glue-cola-custom-tokenizer-expand-vocab | muhtasham | 2023-01-31T21:57:29Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T21:43:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-cola-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-cola-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.6267 | 0.47 | 500 | 4.9363 |
| 5.0496 | 0.94 | 1000 | 4.7414 |
| 4.7524 | 1.4 | 1500 | 4.5982 |
| 4.6772 | 1.87 | 2000 | 4.5334 |
| 4.543 | 2.34 | 2500 | 4.3460 |
| 4.5676 | 2.81 | 3000 | 4.1526 |
| 4.419 | 3.27 | 3500 | 4.3221 |
| 4.3187 | 3.74 | 4000 | 4.0862 |
| 4.3635 | 4.21 | 4500 | 4.1023 |
| 4.2545 | 4.68 | 5000 | 3.8843 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
njrosati/ppo-LunarLander-v2 | njrosati | 2023-01-31T21:56:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T21:56:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.29 +/- 20.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption following `huggingface_sb3` naming conventions):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="njrosati/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dn-gh/ppo-Huggy | dn-gh | 2023-01-31T21:51:00Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-01-31T21:46:11Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://singularite.itch.io/huggy
2. Step 1: Write your model_id: dn-gh/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
iammartian0/taxi_v3_416 | iammartian0 | 2023-01-31T21:44:56Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T21:44:52Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_v3_416
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# NOTE: load_from_hub is the helper defined in the Deep RL Course notebook;
# it is assumed to download and unpickle the saved model dictionary.
model = load_from_hub(repo_id="iammartian0/taxi_v3_416", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
epinnock/flan-t5-small-codeparrot-xlcost-text-to-code | epinnock | 2023-01-31T21:35:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xlcost-text-to-code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-01-31T05:44:06Z | ---
tags:
- generated_from_trainer
datasets:
- xlcost-text-to-code
model-index:
- name: flan-t5-small-codeparrot-xlcost-text-to-code
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-codeparrot-xlcost-text-to-code
This model is a fine-tuned version of [epinnock/flan-t5-small-codeparrot-xlcost-text-to-code](https://huggingface.co/epinnock/flan-t5-small-codeparrot-xlcost-text-to-code) on the xlcost-text-to-code dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
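As a starting point, a minimal text-to-code inference sketch (the prompt format below is an assumption; check the XLCoST preprocessing for the exact input schema):
```python
from transformers import pipeline

# Hypothetical usage sketch -- the natural-language prompt format is an assumption
generator = pipeline("text2text-generation",
                     model="epinnock/flan-t5-small-codeparrot-xlcost-text-to-code")
print(generator("Write a Python function that returns the factorial of n")[0]["generated_text"])
```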
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.0+cu116
- Datasets 2.9.0
- Tokenizers 0.12.1
|
l-tran/distilroberta-base-OLID-MLM | l-tran | 2023-01-31T21:31:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-01-31T21:25:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-OLID-MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-OLID-MLM
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 398 | 0.0143 |
| 1.0511 | 2.0 | 796 | 0.0031 |
| 0.0256 | 3.0 | 1194 | 0.0021 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
garyw/clinical-embeddings-100d-ft-oa-all | garyw | 2023-01-31T21:25:02Z | 0 | 0 | null | [
"license:gpl-3.0",
"region:us"
]
| null | 2023-01-31T20:56:00Z | ---
license: gpl-3.0
---
Pre-trained word embeddings using the text of published scientific manuscripts. These embeddings use 100 dimensions and were trained using the fasttext algorithm on all available manuscripts found in the [PMC Open Access Subset](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/
Citation:
```
@article{flamholz2022word,
title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information},
author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E},
journal={Journal of Biomedical Informatics},
volume={125},
pages={103971},
year={2022},
publisher={Elsevier}
}
```
## Quick start
Word embeddings are compatible with the [`gensim` Python package](https://radimrehurek.com/gensim/) format.
First download the files from this archive. Then load the embeddings into Python.
```python
from gensim.models import FastText, Word2Vec, KeyedVectors # KeyedVectors are used to load the GloVe models
# Load the model
model = FastText.load('ft_oa_all_100d.bin')
# Return 100-dimensional vector representations of each word
model.wv.word_vec('diabetes')
model.wv.word_vec('cardiac_arrest')
model.wv.word_vec('lymphangioleiomyomatosis')
# Try out cosine similarity
model.wv.similarity('copd', 'chronic_obstructive_pulmonary_disease')
model.wv.similarity('myocardial_infarction', 'heart_attack')
model.wv.similarity('lymphangioleiomyomatosis', 'lam')
``` |
Rostlab/prot_t5_xl_uniref50 | Rostlab | 2023-01-31T21:05:58Z | 12,086,185 | 42 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"protein language model",
"dataset:UniRef50",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- protein language model
datasets:
- UniRef50
---
# ProtT5-XL-UniRef50 model
Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in
[this repository](https://github.com/agemagician/ProtTrans). This model is trained on uppercase amino acids: it only works with capital letter amino acids.
## Model description
ProtT5-XL-UniRef50 is based on the `t5-3b` model and was pretrained on a large corpus of protein sequences in a self-supervised fashion.
This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those protein sequences.
One important difference between this T5 model and the original T5 version is the denoising objective.
The original T5-3B model was pretrained using a span denoising objective, while this model was pretrained with a BART-like MLM denoising objective.
The masking probability is consistent with the original T5 training by randomly masking 15% of the amino acids in the input.
It has been shown that the features extracted from this self-supervised model (LM-embeddings) captured important biophysical properties governing protein shape.
This implied learning some of the grammar of the language of life realized in protein sequences.
## Intended uses & limitations
The model could be used for protein feature extraction or be fine-tuned on downstream tasks.
We have noticed that in some tasks one can gain more accuracy by fine-tuning the model rather than using it as a feature extractor.
We have also noticed that for feature extraction, it is better to use the features extracted from the encoder rather than from the decoder.
### How to use
Here is how to use this model to extract the features of a given protein sequence in PyTorch:
```python
import re
import torch
from transformers import T5Tokenizer, T5EncoderModel

# Imports and model/tokenizer loading added for completeness (standard ProtTrans usage)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = T5Tokenizer.from_pretrained('Rostlab/prot_t5_xl_uniref50', do_lower_case=False)
model = T5EncoderModel.from_pretrained('Rostlab/prot_t5_xl_uniref50').to(device)
model.eval()

sequence_examples = ["PRTEINO", "SEQWENCE"]
# this will replace all rare/ambiguous amino acids by X and introduce white-space between all amino acids
sequence_examples = [" ".join(list(re.sub(r"[UZOB]", "X", sequence))) for sequence in sequence_examples]
# tokenize sequences and pad up to the longest sequence in the batch
ids = tokenizer.batch_encode_plus(sequence_examples, add_special_tokens=True, padding="longest")
input_ids = torch.tensor(ids['input_ids']).to(device)
attention_mask = torch.tensor(ids['attention_mask']).to(device)
# generate embeddings
with torch.no_grad():
embedding_repr = model(input_ids=input_ids,attention_mask=attention_mask)
# extract embeddings for the first ([0,:]) sequence in the batch while removing padded & special tokens ([0,:7])
emb_0 = embedding_repr.last_hidden_state[0,:7] # shape (7 x 1024)
print(f"Shape of per-residue embedding of first sequences: {emb_0.shape}")
# do the same for the second ([1,:]) sequence in the batch while taking into account different sequence lengths ([1,:8])
emb_1 = embedding_repr.last_hidden_state[1,:8] # shape (8 x 1024)
# if you want to derive a single representation (per-protein embedding) for the whole protein
emb_0_per_protein = emb_0.mean(dim=0) # shape (1024)
print(f"Shape of per-protein embedding of first sequences: {emb_0_per_protein.shape}")
```
## Training data
The ProtT5-XL-UniRef50 model was pretrained on [UniRef50](https://www.uniprot.org/help/uniref), a dataset consisting of 45 million protein sequences.
## Training procedure
### Preprocessing
The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 21. The rare amino acids "U,Z,O,B" were mapped to "X".
The inputs of the model are then of the form:
```
Protein Sequence [EOS]
```
The preprocessing step was performed on the fly, by cutting and padding the protein sequences up to 512 tokens.
The details of the masking procedure for each sequence are as follows:
- 15% of the amino acids are masked.
- In 90% of the cases, the masked amino acids are replaced by `[MASK]` token.
- In 10% of the cases, the masked amino acids are replaced by a random amino acid (different) from the one they replace.
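For illustration, here is a minimal sketch of that masking scheme (not the original training code; the alphabet and helper below are simplified assumptions):
```python
import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")  # standard residues; rare ones are already mapped to X

def mask_sequence(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Apply the 15% / 90-10 masking scheme described above (illustrative only)."""
    masked = []
    for tok in tokens:
        if random.random() < mask_prob:
            if random.random() < 0.9:
                masked.append(mask_token)  # 90% of masked positions become [MASK]
            else:
                # remaining 10% become a random, different amino acid
                masked.append(random.choice([a for a in AMINO_ACIDS if a != tok]))
        else:
            masked.append(tok)
    return masked

print(mask_sequence(list("SEQWENCE")))
```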
### Pretraining
The model was trained on a single TPU Pod V2-256 for 991.5 thousand steps in total, using sequence length 512 (batch size 2k).
It was trained using ProtT5-XL-BFD model as an initial checkpoint, rather than training from scratch.
It has a total of approximately 3B parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
When the model is used for feature extraction, this model achieves the following results:
Test results :
| Task/Dataset | secondary structure (3-states) | secondary structure (8-states) | Localization | Membrane |
|:-----:|:-----:|:-----:|:-----:|:-----:|
| CASP12 | 81 | 70 | | |
| TS115 | 87 | 77 | | |
| CB513 | 86 | 74 | | |
| DeepLoc | | | 81 | 91 |
### BibTeX entry and citation info
```bibtex
@article {Elnaggar2020.07.12.199554,
author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and BHOWMIK, DEBSINDHU and Rost, Burkhard},
title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
elocation-id = {2020.07.12.199554},
year = {2020},
doi = {10.1101/2020.07.12.199554},
publisher = {Cold Spring Harbor Laboratory},
abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112 times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8 states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability ProtTrans: \<a href="https://github.com/agemagician/ProtTrans"\>https://github.com/agemagician/ProtTrans\</a\>Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554},
eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf},
journal = {bioRxiv}
}
```
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
garyw/clinical-embeddings-300d-w2v-oa-all | garyw | 2023-01-31T20:58:03Z | 0 | 0 | null | [
"license:gpl-3.0",
"region:us"
]
| null | 2023-01-31T19:37:36Z | ---
license: gpl-3.0
---
Pre-trained word embeddings using the text of published biomedical manuscripts. These embeddings use 300 dimensions and were trained using the word2vec algorithm on all available manuscripts found in the [PMC Open Access Subset](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/
Citation:
```
@article{flamholz2022word,
title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information},
author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E},
journal={Journal of Biomedical Informatics},
volume={125},
pages={103971},
year={2022},
publisher={Elsevier}
}
```
## Quick start
Word embeddings are compatible with the [`gensim` Python package](https://radimrehurek.com/gensim/) format.
First download the files from this archive. Then load the embeddings into Python.
```python
from gensim.models import FastText, Word2Vec, KeyedVectors # KeyedVectors are used to load the GloVe models
# Load the model
model = Word2Vec.load('w2v_oa_all_300d.bin')
# Return 300-dimensional vector representations of each word
model.wv.word_vec('diabetes')
model.wv.word_vec('cardiac_arrest')
model.wv.word_vec('lymphangioleiomyomatosis')
# Try out cosine similarity
model.wv.similarity('copd', 'chronic_obstructive_pulmonary_disease')
model.wv.similarity('myocardial_infarction', 'heart_attack')
model.wv.similarity('lymphangioleiomyomatosis', 'lam')
``` |
arif5894/http | arif5894 | 2023-01-31T20:51:14Z | 0 | 1 | null | [
"region:us"
]
| null | 2023-01-31T20:50:02Z | JOIN US IN THE Link RUNNING TO PARTICIPATE
Watch 54th Running of the GFL International 500 Snowmobile Race Week Live Stream
𝐋𝐢𝐯𝐞 𝐋𝐢𝐧𝐤🔗 http://allplusonline.xyz/500-Snowmobile/ |
Lakoc/dqn-SpaceInvadersNoFrameskip-v4 | Lakoc | 2023-01-31T20:51:00Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T20:50:20Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 633.50 +/- 166.03
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Lakoc -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Lakoc -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Lakoc
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
techzizou/yolov8x | techzizou | 2023-01-31T20:22:01Z | 0 | 0 | null | [
"object detection",
"computer vision",
"machine learning",
"yolo",
"yolov8",
"dataset:detection-datasets/coco",
"license:gpl-3.0",
"region:us"
]
| null | 2023-01-31T19:25:25Z | ---
license: gpl-3.0
datasets:
- detection-datasets/coco
tags:
- object detection
- computer vision
- machine learning
- yolo
- yolov8
---
### Model Description
[Ultralytics:](https://github.com/ultralytics/ultralytics/) YOLOv8 in PyTorch > ONNX > CoreML > TFLite
### Installation
```
pip install ultralytics
```
### Yolov8 Inference
```python
from ultralytics import YOLO

model = YOLO('techzizou/yolov8x')

# Example inference settings (placeholder values; adjust for your use case)
conf_threshold = 0.25   # minimum detection confidence
iou_threshold = 0.45    # IoU threshold for non-maximum suppression
image = 'image.jpg'     # path to an input image
image_size = 640

# In YOLOv8, conf/iou are passed to predict() rather than set as model attributes
prediction = model.predict(image, imgsz=image_size, conf=conf_threshold,
                           iou=iou_threshold, show=False, save=False)
``` |
michalcisek5/PPO-LunarLander-v2 | michalcisek5 | 2023-01-31T20:10:03Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-28T16:20:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.30 +/- 21.69
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption following `huggingface_sb3` naming conventions):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="michalcisek5/PPO-LunarLander-v2", filename="PPO-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
TYTSAI/ppo-LunarLander-v2-3 | TYTSAI | 2023-01-31T20:00:31Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T20:00:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.61 +/- 24.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption following `huggingface_sb3` naming conventions):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="TYTSAI/ppo-LunarLander-v2-3", filename="ppo-LunarLander-v2-3.zip")
model = PPO.load(checkpoint)
```
|
NihiLicA/ppo-LunarLander-v2 | NihiLicA | 2023-01-31T19:41:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T19:13:03Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.13 +/- 17.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption following `huggingface_sb3` naming conventions):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="NihiLicA/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
bhalll/q-Taxi-v3 | bhalll | 2023-01-31T19:29:51Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T19:29:48Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# NOTE: load_from_hub is the helper defined in the Deep RL Course notebook;
# it is assumed to download and unpickle the saved model dictionary.
model = load_from_hub(repo_id="bhalll/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
shri07/my_awesome_billsum_model | shri07 | 2023-01-31T19:27:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-01-31T19:13:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1486
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5092
- Rouge1: 0.1486
- Rouge2: 0.0551
- Rougel: 0.123
- Rougelsum: 0.1226
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
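As an illustration, a minimal summarization sketch (the `summarize:` prefix is an assumption based on T5 conventions):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="shri07/my_awesome_billsum_model")

text = "summarize: The bill appropriates funds for highway maintenance and directs the agency to report annually."
print(summarizer(text)[0]["summary_text"])
```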
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7960 | 0.131 | 0.0394 | 0.1101 | 0.1099 | 19.0 |
| No log | 2.0 | 124 | 2.5857 | 0.1389 | 0.0502 | 0.1167 | 0.1162 | 19.0 |
| No log | 3.0 | 186 | 2.5245 | 0.1462 | 0.054 | 0.1217 | 0.1215 | 19.0 |
| No log | 4.0 | 248 | 2.5092 | 0.1486 | 0.0551 | 0.123 | 0.1226 | 19.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
RTT/ppo-Huggy | RTT | 2023-01-31T19:23:31Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-01-31T19:23:23Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: RTT/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lauralex/distilbert-base-uncased-finetuned-emotion | lauralex | 2023-01-31T19:16:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-01-29T21:07:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2992
- eval_accuracy: 0.9085
- eval_f1: 0.9069
- eval_runtime: 2.8799
- eval_samples_per_second: 694.475
- eval_steps_per_second: 11.112
- epoch: 1.0
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
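A minimal inference sketch (assumes this repo id; labels follow the `emotion` dataset):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="lauralex/distilbert-base-uncased-finetuned-emotion")
print(clf("I'm thrilled with how this turned out!"))
```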
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
stinoco/ppo-Huggy | stinoco | 2023-01-31T19:06:24Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-01-31T19:06:17Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: stinoco/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
tomekkorbak/cranky_lichterman | tomekkorbak | 2023-01-31T19:03:41Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| null | 2023-01-30T18:01:37Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: cranky_lichterman
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cranky_lichterman
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
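A minimal generation sketch (sampling settings mirror the evaluation config further below; loading the checkpoint directly via `pipeline` is an assumption):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="tomekkorbak/cranky_lichterman")
print(generator("The city council met on", max_length=128, do_sample=True,
                temperature=0.7, top_p=0.9, top_k=0)[0]["generated_text"])
```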
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 4096}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'cranky_lichterman',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 5070,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1erso18p |
EstherT/clasificador-muchocine | EstherT | 2023-01-31T19:02:02Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"es",
"dataset:muchocine",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-01-31T18:00:57Z | ---
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
datasets:
- muchocine
language:
- es
---
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the 'muchocine' dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3389
- Accuracy: 0.4671
## Model description
This model predicts a 1-5 star_rating for a movie based on a short review in Spanish.
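A minimal inference sketch (assumes this repo id; the predicted label maps to the 1-5 star rating):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="EstherT/clasificador-muchocine")
print(clf("Una película entretenida, aunque algo predecible."))
```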
## Training and evaluation data
The model uses the train split of the 'muchocine' dataset, containing 3,872 reviews.
## Training procedure
The original dataset was randomized and subsequently split into a training set (80%) and a testing set (20%).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.2937 | 0.4335 |
| 1.4261 | 2.0 | 776 | 1.2515 | 0.4839 |
| 1.0492 | 3.0 | 1164 | 1.3389 | 0.4671 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2 |
odahl/ppo-Pyramids | odahl | 2023-01-31T18:45:00Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-01-31T18:43:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: odahl/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
phonenix/Pixelcopter-PLE-v0 | phonenix | 2023-01-31T18:13:02Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T18:12:11Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 28.20 +/- 20.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
garyw/clinical-embeddings-100d-w2v-cr | garyw | 2023-01-31T18:12:49Z | 0 | 2 | null | [
"license:gpl-3.0",
"region:us"
]
| null | 2023-01-23T19:46:38Z | ---
license: gpl-3.0
---
Pre-trained word embeddings using the text of published clinical case reports. These embeddings use 100 dimensions and were trained using the word2vec algorithm on published clinical case reports found in the [PMC Open Access Subset](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/
Citation:
```
@article{flamholz2022word,
title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information},
author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E},
journal={Journal of Biomedical Informatics},
volume={125},
pages={103971},
year={2022},
publisher={Elsevier}
}
```
## Quick start
Word embeddings are compatible with the [`gensim` Python package](https://radimrehurek.com/gensim/) format.
First download the files from this archive. Then load the embeddings into Python.
```python
from gensim.models import FastText, Word2Vec, KeyedVectors # KeyedVectors are used to load the GloVe models
# Load the model
model = Word2Vec.load('w2v_oa_cr_100d.bin')
# Return 100-dimensional vector representations of each word
model.wv.word_vec('diabetes')
model.wv.word_vec('cardiac_arrest')
model.wv.word_vec('lymphangioleiomyomatosis')
# Try out cosine similarity
model.wv.similarity('copd', 'chronic_obstructive_pulmonary_disease')
model.wv.similarity('myocardial_infarction', 'heart_attack')
model.wv.similarity('lymphangioleiomyomatosis', 'lam')
``` |
garyw/clinical-embeddings-300d-w2v-cr | garyw | 2023-01-31T18:12:22Z | 0 | 1 | null | [
"license:gpl-3.0",
"region:us"
]
| null | 2023-01-31T13:52:49Z | ---
license: gpl-3.0
---
Pre-trained word embeddings using the text of published clinical case reports. These embeddings use 300 dimensions and were trained using the word2vec algorithm on published clinical case reports found in the [PMC Open Access Subset](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/
Citation:
```
@article{flamholz2022word,
title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information},
author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E},
journal={Journal of Biomedical Informatics},
volume={125},
pages={103971},
year={2022},
publisher={Elsevier}
}
```
## Quick start
Word embeddings are compatible with the [`gensim` Python package](https://radimrehurek.com/gensim/) format.
First download the files from this archive. Then load the embeddings into Python.
```python
from gensim.models import FastText, Word2Vec, KeyedVectors # KeyedVectors are used to load the GloVe models
# Load the model
model = Word2Vec.load('w2v_oa_cr_300d.bin')
# Return 300-dimensional vector representations of each word
model.wv.word_vec('diabetes')
model.wv.word_vec('cardiac_arrest')
model.wv.word_vec('lymphangioleiomyomatosis')
# Try out cosine similarity
model.wv.similarity('copd', 'chronic_obstructive_pulmonary_disease')
model.wv.similarity('myocardial_infarction', 'heart_attack')
model.wv.similarity('lymphangioleiomyomatosis', 'lam')
``` |
garyw/clinical-embeddings-600d-w2v-cr | garyw | 2023-01-31T18:11:54Z | 0 | 1 | null | [
"license:gpl-3.0",
"region:us"
]
| null | 2023-01-31T13:57:25Z | ---
license: gpl-3.0
---
Pre-trained word embeddings using the text of published clinical case reports. These embeddings use 600 dimensions and were trained using the word2vec algorithm on published clinical case reports found in the [PMC Open Access Subset](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/
Citation:
```
@article{flamholz2022word,
title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information},
author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E},
journal={Journal of Biomedical Informatics},
volume={125},
pages={103971},
year={2022},
publisher={Elsevier}
}
```
## Quick start
Word embeddings are compatible with the [`gensim` Python package](https://radimrehurek.com/gensim/) format.
First download the files from this archive. Then load the embeddings into Python.
```python
from gensim.models import FastText, Word2Vec, KeyedVectors # KeyedVectors are used to load the GloVe models
# Load the model
model = Word2Vec.load('w2v_oa_cr_600d.bin')
# Return 600-dimensional vector representations of each word
model.wv.word_vec('diabetes')
model.wv.word_vec('cardiac_arrest')
model.wv.word_vec('lymphangioleiomyomatosis')
# Try out cosine similarity
model.wv.similarity('copd', 'chronic_obstructive_pulmonary_disease')
model.wv.similarity('myocardial_infarction', 'heart_attack')
model.wv.similarity('lymphangioleiomyomatosis', 'lam')
``` |
mallycrip/q-FrozenLake-v1-4x4-noSlippery | mallycrip | 2023-01-31T17:59:49Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T17:58:32Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# NOTE: load_from_hub is the helper defined in the Deep RL Course notebook;
# it is assumed to download and unpickle the saved model dictionary.
model = load_from_hub(repo_id="mallycrip/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Enru/ppo_LunarLanderV2 | Enru | 2023-01-31T17:31:26Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T17:20:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -160.05 +/- 82.57
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption following `huggingface_sb3` naming conventions):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="Enru/ppo_LunarLanderV2", filename="ppo_LunarLanderV2.zip")
model = PPO.load(checkpoint)
```
|
aj555/a2c-PandaReachDense-v2 | aj555 | 2023-01-31T17:26:04Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-25T14:57:02Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.78 +/- 0.31
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list, and note that creating the environment requires `panda-gym`):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load the policy (filename assumed)
checkpoint = load_from_hub(repo_id="aj555/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
jchhabra/bert-finetuned-squad | jchhabra | 2023-01-31T17:15:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-01-31T14:16:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Elifr/clasificador-rotten-tomatoes | Elifr | 2023-01-31T17:04:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-01-31T17:01:04Z | ---
tags:
- classification
- generated_from_trainer
datasets:
- rotten_tomatoes
model-index:
- name: clasificador-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-rotten-tomatoes
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the rotten_tomatoes dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
eevvgg/roberta-base-sentiment-politics | eevvgg | 2023-01-31T16:54:30Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"text",
"sentiment",
"politics",
"pl",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-01-30T12:29:04Z | ---
language:
- pl
- en
pipeline_tag: text-classification
widget:
- text: TRUMP needs undecided voters
example_title: example 1
- text: Oczywiście ze Pan Prezydent to nasza duma narodowa!!
example_title: example 2
tags:
- text
- sentiment
- politics
- text-classification
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: sentimenTw-political
results:
- task:
type: text-classification
name: Text Classification
dataset:
type: social media
name: politics
metrics:
- type: f1 macro
value: 71.2
- type: accuracy
value: 74
---
# eevvgg/sentimenTw-political
This model is a fine-tuned version of the multilingual model [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment).
Classification of text sentiment into 3 categories: negative, neutral, positive.
Fine-tuned on a 2k sample of manually annotated Reddit (EN) and Twitter (PL) data.
- **Developed by:** Ewelina Gajewska, as part of the ComPathos project: https://www.ncn.gov.pl/sites/default/files/listy-rankingowe/2020-09-30apsv2/streszczenia/497124-en.pdf
- **Model type:** RoBERTa for sentiment classification
- **Language(s) (NLP):** Multilingual; fine-tuned on 1k English texts from Reddit and 1k Polish tweets
- **License:** [More Information Needed]
- **Finetuned from model:** [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment)
# Uses
Sentiment classification of multilingual data. Fine-tuned on a 2k sample of English and Polish social media texts from the political domain.
The model is suited for short texts (up to 200 tokens).
## How to Get Started with the Model
```python
from transformers import pipeline

model_path = "eevvgg/sentimenTw-political"
sentiment_task = pipeline(task="text-classification", model=model_path, tokenizer=model_path)
sequence = ["TRUMP needs undecided voters",
"Oczywiście ze Pan Prezydent to nasza duma narodowa!!"]
result = sentiment_task(sequence)
labels = [i['label'] for i in result] # ['neutral', 'positive']
```
## Model Sources
- **Repository:** [Colab notebook](https://colab.research.google.com/drive/1Rqgjp2tlReZ-hOZz63jw9cIwcZmcL9lR?usp=sharing)
- **Paper:** TBA
- **BibTex citation:**
```
@misc{SentimenTwGK2023,
author={Gajewska, Ewelina and Konat, Barbara},
title={SentimenTw XLM-RoBERTa-base Model for Multilingual Sentiment Classification on Social Media},
year={2023},
howpublished = {\url{https://huggingface.co/eevvgg/sentimenTw-political}},
}
```
# Training Details
- Trained for 3 epochs, mini-batch size of 8.
- Training results: loss: 0.515
- See details in [Colab notebook](https://colab.research.google.com/drive/1Rqgjp2tlReZ-hOZz63jw9cIwcZmcL9lR?usp=sharing)
### Preprocessing
- Hyperlinks and user mentions (@) are normalized to "http" and "@user" tokens, respectively, and extra spaces are removed (see the sketch below).
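A minimal regex sketch of this normalization (the exact patterns are assumptions):
```python
import re

def normalize(text: str) -> str:
    text = re.sub(r"https?://\S+", "http", text)  # hyperlinks -> "http"
    text = re.sub(r"@\w+", "@user", text)         # user mentions -> "@user"
    return re.sub(r"\s+", " ", text).strip()      # remove extra spaces
```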
### Speeds, Sizes, Times
- See [Colab notebook](https://colab.research.google.com/drive/1Rqgjp2tlReZ-hOZz63jw9cIwcZmcL9lR?usp=sharing)
# Evaluation
- Evaluation was run on a sample of 200 texts (10% of the data).
## Results
- accuracy: 74.0
- macro avg:
- f1: 71.2
- precision: 72.8
- recall: 70.8
- weighted avg:
- f1: 73.3
- precision: 74.0
- recall: 74.0
|          | precision | recall | f1-score | support |
|----------|-----------|--------|----------|---------|
| negative | 0.752     | 0.901  | 0.820    | 91      |
| neutral  | 0.764     | 0.592  | 0.667    | 71      |
| positive | 0.667     | 0.632  | 0.649    | 38      |
# Citation
**BibTeX:**
```
@misc{SentimenTwGK2023,
author={Gajewska, Ewelina and Konat, Barbara},
title={SentimenTw XLM-RoBERTa-base Model for Multilingual Sentiment Classification on Social Media},
year={2023},
howpublished = {\url{https://huggingface.co/eevvgg/sentimenTw-political}},
}
```
**APA:**
```
Gajewska, E., & Konat, B. (2023).
SentimenTw XLM-RoBERTa-base Model for Multilingual Sentiment Classification on Social Media.
https://huggingface.co/eevvgg/sentimenTw-political.
``` |
regnore/Kudou_Chitose_LoRA | regnore | 2023-01-31T16:53:25Z | 0 | 3 | null | [
"license:cc-by-2.0",
"region:us"
]
| null | 2023-01-31T14:51:50Z | ---
license: cc-by-2.0
---
+ Kudou_Chitose:

+ Kudou_Chitose_v2:

+ Kudou_Chitose_SD:

+ additional prompts you may need to get better results:
`yellow eyes`, `white hair`, `white bloomers`, `hat`
|
bdrighes/Taxi-v3 | bdrighes | 2023-01-31T16:38:24Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T16:32:35Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym
import pickle
from huggingface_hub import hf_hub_download
# Inline equivalent of the course's `load_from_hub` helper: download the pickle, then load it
with open(hf_hub_download(repo_id="bdrighes/Taxi-v3", filename="q-learning.pkl"), "rb") as f:
    model = pickle.load(f)
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
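To estimate the mean return of the loaded policy (a sketch; assumes the pickled dict stores the table under the `"qtable"` key, as in the course notebooks, and the pre-0.26 `gym` step API):
```python
import numpy as np

returns = []
for _ in range(100):
    state, done, total = env.reset(), False, 0.0
    while not done:
        # greedy action from the Q-table
        state, reward, done, info = env.step(int(np.argmax(model["qtable"][state])))
        total += reward
    returns.append(total)
print(f"mean return: {np.mean(returns):.2f} +/- {np.std(returns):.2f}")
```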
|
Hatman/bert-finetuned-ner | Hatman | 2023-01-31T16:17:28Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-01-30T17:24:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0814
## Model description
bert-base-NER is a fine-tuned BERT model that is ready to use for Named Entity Recognition and achieves state-of-the-art performance on the NER task. It has been trained to recognize four entity types: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC).
Specifically, this model is a bert-base-cased model that was fine-tuned on the English version of the standard CoNLL-2003 Named Entity Recognition dataset.
If you'd like to use a larger BERT-large model fine-tuned on the same dataset, a bert-large-NER version is also available.
## How to Use
You can use this model with the Transformers pipeline for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("Hatman/bert-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Hatman/bert-finetuned-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0181 | 1.0 | 1756 | 0.1301 |
| 0.0166 | 2.0 | 3512 | 0.0762 |
| 0.0064 | 3.0 | 5268 | 0.0814 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
dbal0503/S5 | dbal0503 | 2023-01-31T15:53:55Z | 0 | 1 | null | [
"en",
"arxiv:2208.04933",
"arxiv:2111.00396",
"region:us"
]
| null | 2023-01-31T15:47:37Z | ---
language:
- en
---
# S5: Simplified State Space Layers for Sequence Modeling
This repository provides the implementation for the
paper: Simplified State Space Layers for Sequence Modeling. The preprint is available [here](https://arxiv.org/abs/2208.04933).

<p style="text-align: center;">
Figure 1: S5 uses a single multi-input, multi-output linear state-space model, coupled with non-linearities, to define a non-linear sequence-to-sequence transformation. Parallel scans are used for efficient offline processing.
</p>
The S5 layer builds on the prior S4 work ([paper](https://arxiv.org/abs/2111.00396)). While it has departed considerably, this repository originally started off with much of the JAX implementation of S4 from the
Annotated S4 blog by Rush and Karamcheti (available [here](https://github.com/srush/annotated-s4)).
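To make the parallel-scan idea concrete, here is a minimal JAX sketch of applying a diagonal linear SSM over a sequence with an associative scan (an illustration of the mechanism only, not the repository's reference implementation; names are illustrative):
```python
import jax
import jax.numpy as jnp

def diagonal_ssm_scan(Lambda, Bu):
    """Compute x_k = Lambda * x_{k-1} + Bu_k for all k with a parallel scan.
    Lambda: (P,) diagonal state matrix; Bu: (L, P) projected inputs."""
    def binop(e_i, e_j):
        a_i, b_i = e_i
        a_j, b_j = e_j
        return a_j * a_i, a_j * b_i + b_j
    As = jnp.tile(Lambda, (Bu.shape[0], 1))            # per-step transition coefficients
    _, xs = jax.lax.associative_scan(binop, (As, Bu))  # inclusive scan over the sequence axis
    return xs  # (L, P) hidden states, computed in O(log L) parallel depth
```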
## Requirements & Installation
To run the code on your own machine, run either `pip install -r requirements_cpu.txt` or `pip install -r requirements_gpu.txt`. The GPU installation of JAX can be tricky, and so we include requirements that should work for most people, although further instructions are available [here](https://github.com/google/jax#installation).
Run from within the root directory `pip install -e .` to install the package.
## Data Download
Downloading the raw data is done differently for each dataset. The following datasets require no action:
- Text (IMDb)
- Image (Cifar black & white)
- sMNIST
- psMNIST
- Cifar (Color)
The remaining datasets need to be manually downloaded. To download _everything_, run `./bin/download_all.sh`. This will download quite a lot of data and will take some time.
Below is a summary of the steps for each dataset:
- ListOps: run `./bin/download_lra.sh` to download the full LRA dataset.
- Retrieval (AAN): run `./bin/download_aan.sh`
- Pathfinder: run `./bin/download_lra.sh` to download the full LRA dataset.
- Path-X: run `./bin/download_lra.sh` to download the full LRA dataset.
- Speech commands 35: run `./bin/download_sc35.sh` to download the speech commands data.
*With the exception of SC35.* When the dataset is used for the first time, a cache is created in `./cache_dir`. Converting the data (e.g. tokenizing) can be quite slow, and so this cache contains the processed dataset. The cache can be moved and specified with the `--dir_name` argument (i.e. the default is `--dir_name=./cache_dir`) to avoid applying this preprocessing every time the code is run somewhere new.
SC35 is slightly different. SC35 doesn't use `--dir_name`, and instead requires that the following path exists: `./raw_datasets/speech_commands/0.0.2/SpeechCommands` (i.e. the directory `./raw_datasets/speech_commands/0.0.2/SpeechCommands/zero` must exist). The cache is then stored in `./raw_datasets/speech_commands/0.0.2/SpeechCommands/processed_data`. This directory can then be copied (preserving the directory path) to move the preprocessed dataset to a new location.
## Repository Structure
Directories and files that ship with the GitHub repo:
```
s5/                      Source code for models, datasets, etc.
    dataloading.py       Dataloading functions.
    layers.py            Defines the S5 layer which wraps the S5 SSM with nonlinearity, norms, dropout, etc.
    seq_model.py         Defines deep sequence models that consist of stacks of S5 layers.
    ssm.py               S5 SSM implementation.
    ssm_init.py          Helper functions for initializing the S5 SSM.
    train.py             Training loop code.
    train_helpers.py     Functions for optimization, training and evaluation steps.
    dataloaders/         Code mainly derived from S4 processing each dataset.
    utils/               Range of utility functions.
bin/                     Shell scripts for downloading data and running example experiments.
requirements_cpu.txt     Requirements for running in CPU mode (not advised).
requirements_gpu.txt     Requirements for running in GPU mode (installation can be highly system-dependent).
run_train.py             Training loop entrypoint.
```
Directories that may be created on-the-fly:
```
raw_datasets/            Raw data as downloaded.
cache_dir/               Precompiled caches of data. Can be copied to new locations to avoid preprocessing.
wandb/                   Local WandB log files.
```
## Experiments
The configurations to run the LRA and 35-way Speech Commands experiments from the paper are located in `bin/run_experiments`. For example,
to run the LRA text (character level IMDB) experiment, run `./bin/run_experiments/run_lra_imdb.sh`.
To log with W&B, adjust the default `USE_WANDB, wandb_entity, wandb_project` arguments.
Note: the pendulum
regression dataloading and experiments will be added soon.
## Citation
Please use the following when citing our work:
```
@misc{smith2022s5,
doi = {10.48550/ARXIV.2208.04933},
url = {https://arxiv.org/abs/2208.04933},
author = {Smith, Jimmy T. H. and Warrington, Andrew and Linderman, Scott W.},
keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Simplified State Space Layers for Sequence Modeling},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
Please reach out if you have any questions.
-- The S5 authors.
|
zuzhe/Ancient-Chinese-head-portrait | zuzhe | 2023-01-31T15:50:44Z | 0 | 19 | null | [
"license:openrail",
"region:us"
]
| null | 2023-01-31T05:47:24Z | ---
license: openrail
---
The ancient Chinese head-portrait model needs a low CFG scale, such as 3.5–7.
I am sharing this mainly because some people monopolize similar models to make money; I think AI should follow the Internet's spirit of sharing.
I will gradually share more Chinese-style models in the future. I thank my QQ friends for their long-term help and teaching. Thank you again. |
belalelseraty/ppo-LunarLander | belalelseraty | 2023-01-31T15:50:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-31T15:50:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.30 +/- 21.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the policy (filename assumed)
checkpoint = load_from_hub(repo_id="belalelseraty/ppo-LunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
hxssgaa/dstc_setfit_deberta | hxssgaa | 2023-01-31T15:41:33Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"deberta-v2",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-01-31T15:41:13Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hxssgaa/dstc_setfit_deberta
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hxssgaa/dstc_setfit_deberta')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hxssgaa/dstc_setfit_deberta')
model = AutoModel.from_pretrained('hxssgaa/dstc_setfit_deberta')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hxssgaa/dstc_setfit_deberta)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 10240 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 20480,
"warmup_steps": 2048,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DebertaV2Model
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Liapunov/ppo-SnowballTargetFR | Liapunov | 2023-01-31T15:39:36Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-01-31T15:39:30Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Liapunov/ppo-SnowballTargetFR
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
davanstrien/testwebhook | davanstrien | 2023-01-31T15:25:21Z | 0 | 1 | diffusers | [
"diffusers",
"legal",
"document-question-answering",
"en",
"dataset:pile-of-law/pile-of-law",
"co2_eq_emissions",
"region:us"
]
| document-question-answering | 2023-01-21T15:49:15Z | ---
datasets:
- pile-of-law/pile-of-law
metrics:
- accuracy
- abdusah/aradiawer
pipeline_tag: document-question-answering
tags:
- legal
co2_eq_emissions:
emissions: 0.2345
language:
- en
library_name: diffusers
--- |