Each record below lists the following fields in order, separated by `|`, with the full model card text in the final `card` field:

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | 5–139 chars |
| author | string | 2–42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-29 06:27:49 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 502 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-29 06:23:06 |
| card | string | 11 – 1.01M chars |
YouCarryOats/Qtable_Taxi-v3 | YouCarryOats | 2023-06-04T20:43:09Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T20:43:07Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Qtable_Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or gymnasium, depending on your setup

# `load_from_hub` is the helper defined alongside this model's training code
# (e.g. the Deep RL course notebook); it is not part of a published package.
model = load_from_hub(repo_id="YouCarryOats/Qtable_Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
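Once loaded, a minimal sketch of rolling out the greedy policy from the Q-table — assuming the pickled dict exposes a `qtable` entry, as in the Deep RL course notebooks (both the key name and the gym API variant below are assumptions):
```python
import numpy as np

# With recent gym/gymnasium, reset() returns (state, info) and step() returns
# (obs, reward, terminated, truncated, info); adjust the unpacking if needed.
state = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print("Episode reward:", total_reward)
```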
|
Actuary/rl_course_vizdoom_health_gathering_supreme | Actuary | 2023-06-04T20:22:40Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T20:15:44Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.64 +/- 4.81
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Actuary/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously stopped.
|
lprat/wiki_qa_model | lprat | 2023-06-04T20:21:36Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-04T16:54:20Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: wiki_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wiki_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 124, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
platzi/platzi-vit-model-santiago | platzi | 2023-06-04T19:32:46Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-04T19:23:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi-vit-model-santiago
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.48120300751879697
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-santiago
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0180
- Accuracy: 0.4812
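For quick experimentation, a rough usage sketch with the Transformers `pipeline` API (the image path is a placeholder; keep the modest validation accuracy above in mind):
```python
from transformers import pipeline

# Assumes the checkpoint loads through the standard image-classification pipeline.
classifier = pipeline("image-classification", model="platzi/platzi-vit-model-santiago")

# Replace with a real bean-leaf image (local path or URL).
predictions = classifier("path/to/bean_leaf.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```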
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0972 | 3.85 | 500 | 1.0180 | 0.4812 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
coreychambers/entartst | coreychambers | 2023-06-04T19:17:38Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-06-04T19:09:12Z | ---
license: openrail
language: english
tags:
- sentiment-analysis
- distilbert
---
# Sentiment Analysis with DistilBert
This model is a fine-tuned version of DistilBert for sentiment analysis on the IMDB dataset.
## How the model was trained
This model was trained using the Hugging Face Transformers library. It was trained for 1 epoch on the IMDB dataset, with a batch size of 4 and a learning rate of 5e-5.
## How to use
You can use this model for sentiment analysis. Here's an example of how to do this in Python:
```python
from transformers import pipeline
# Load the model
classifier = pipeline('sentiment-analysis', model='your-model-name')
# Classify some text
result = classifier("I love this movie!")[0]
print(f"label: {result['label']}, with score: {result['score']:.4f}")
|
jcr987/dummi | jcr987 | 2023-06-04T18:34:57Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-06-04T17:25:04Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KJangam/amznproducts | KJangam | 2023-06-04T18:25:10Z | 65 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-04T13:14:48Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: KJangam/amznproducts
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KJangam/amznproducts
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3855
- Validation Loss: 0.7152
- Train Accuracy: 0.759
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 12500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.9536 | 0.7843 | 0.732 | 0 |
| 0.5358 | 0.6823 | 0.7616 | 1 |
| 0.3855 | 0.7152 | 0.759 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chereddy/ppo-LunarLander-v2 | chereddy | 2023-06-04T18:18:07Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-30T01:59:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.47 +/- 23.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
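As a placeholder for the TODO above, a minimal loading sketch with `huggingface_sb3` might look like the following (the checkpoint filename inside the repo is an assumption — check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; verify it against the files in the repository.
checkpoint = load_from_hub(
    repo_id="chereddy/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```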
|
Neerajvibez/q-FrozenLake-v1-4x4-noSlippery | Neerajvibez | 2023-06-04T18:13:57Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T17:26:00Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or gymnasium, depending on your setup

# `load_from_hub` is the helper defined alongside this model's training code
# (e.g. the Deep RL course notebook); it is not part of a published package.
model = load_from_hub(repo_id="Neerajvibez/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
KostiuchenkoArtem/bart_large_multi_modified | KostiuchenkoArtem | 2023-06-04T17:54:43Z | 69 | 0 | transformers | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"summarization",
"en",
"dataset:multi_news",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-05-27T11:37:15Z | ---
license: mit
tags:
- generated_from_keras_callback
- summarization
model-index:
- name: KostiuchenkoArtem/bart_large_multi_modified
results: []
datasets:
- multi_news
language:
- en
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KostiuchenkoArtem/bart_large_multi_modified
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the Multi-News dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8945
- Validation Loss: 2.1223
- Epoch: 1
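A rough usage sketch with the Transformers `pipeline` API — this repository ships TensorFlow weights, so the pipeline is pinned to the TF framework; treat it as an untested sketch:
```python
from transformers import pipeline

# framework="tf" because the repository provides TensorFlow (Keras) weights.
summarizer = pipeline(
    "summarization",
    model="KostiuchenkoArtem/bart_large_multi_modified",
    framework="tf",
)

article = "..."  # a long news article (or several concatenated articles, as in Multi-News)
summary = summarizer(article, max_length=142, min_length=56, truncation=True)
print(summary[0]["summary_text"])
```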
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.2231 | 2.1476 | 0 |
| 1.8945 | 2.1223 | 1 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3 |
Hellraiser24/git-checkpoint | Hellraiser24 | 2023-06-04T17:31:49Z | 61 | 0 | transformers | [
"transformers",
"pytorch",
"git",
"image-text-to-text",
"generated_from_trainer",
"dataset:textvqa",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-06-04T16:51:40Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- textvqa
model-index:
- name: git-checkpoint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-checkpoint
This model is a fine-tuned version of [microsoft/git-base-textvqa](https://huggingface.co/microsoft/git-base-textvqa) on the textvqa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
asenella/mmnist_JMVAEconfig_resnet_seed_0_ratio_05_c | asenella | 2023-06-04T17:25:11Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-04T17:24:53Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
pigeon01/sungju_finetuned-ko-to-en_ver3 | pigeon01 | 2023-06-04T17:17:03Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"longt5",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-06-04T16:50:05Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: sungju_finetuned-ko-to-en_ver3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sungju_finetuned-ko-to-en_ver3
This model is a fine-tuned version of [KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-ko2en](https://huggingface.co/KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-ko2en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0946
- Bleu: 28.4700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Actuary/dqn-SpaceInvadersNoFrameskip-v4 | Actuary | 2023-06-04T16:53:15Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T16:52:35Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 724.50 +/- 373.07
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Actuary -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Actuary -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Actuary
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
laihuiyuan/MMFLD | laihuiyuan | 2023-06-04T16:51:10Z | 103 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"arxiv:2306.00121",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-04T10:31:28Z | ---
language:
- en
license: apache-2.0
---
# Paper
This is an mT5-based model for multilingual multi-figurative language detection, covering three figures of speech (hyperbole, idiom, and metaphor) and seven languages (English:EN, Chinese:ZH, German:DE, Spanish:ES, Italian:IT, Farsi:FA, and Russian:RU). It was introduced in the paper [Multilingual Multi-Figurative Language Detection](https://arxiv.org/abs/2306.00121).
# Abstract
Figures of speech help people express abstract concepts and evoke stronger emotions than literal expressions, thereby making texts more creative and engaging. Due to its pervasive and fundamental character, figurative language understanding has been addressed in Natural Language Processing, but it's highly understudied in a multilingual setting and when considering more than one figure of speech at the same time. To bridge this gap, we introduce multilingual multi-figurative language modelling, and provide a benchmark for sentence-level figurative language detection, covering three common figures of speech and seven languages. Specifically, we develop a framework for figurative language detection based on template-based prompt learning. In so doing, we unify multiple detection tasks that are interrelated across multiple figures of speech and languages, without requiring task- or language-specific modules. Experimental results show that our framework outperforms several strong baselines and may serve as a blueprint for the joint modelling of other interrelated tasks.
# How to use
```python
from transformers import MT5TokenizerFast, MT5ForConditionalGeneration
tokenizer = MT5TokenizerFast.from_pretrained('laihuiyuan/MMFLD')
model = MT5ForConditionalGeneration.from_pretrained('laihuiyuan/MMFLD')
prompt = 'Which figure of speech does this text contain? (A) Literal. (B) {}. | Text: {}'
task = 'Idiom' # Hyperbole and Metaphor are also supported
text = 'This is a perfect way to break the ice and start the conversation.'
inputs = prompt.format(task, text)
inputs = tokenizer(inputs, return_tensors="pt")
output = model.generate(**inputs, num_beams=5, max_length=10)
pred = tokenizer.decode(output[0].tolist(), skip_special_tokens=True, clean_up_tokenization_spaces=False)
```
# Citation Info
```BibTeX
@inproceedings{lai-etal-2023-multi,
title = "Multilingual Multi-Figurative Language Detection",
author = "Lai, Huiyuan and Toral, Antonio and Nissim, Malvina",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
month = July,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
}
``` |
peteozegov/Pixelcopter-PLE-v0 | peteozegov | 2023-06-04T16:47:00Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T16:46:58Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.40 +/- 12.99
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SotirisLegkas/final_socratic_dialoGPT | SotirisLegkas | 2023-06-04T16:46:26Z | 97 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-04T15:51:27Z | ---
pipeline_tag: conversational
--- |
hhyxnh/WLOP-style_stable_diffusion-heywhale | hhyxnh | 2023-06-04T16:35:02Z | 51 | 26 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"diffusion-models-class",
"dreambooth-hackathon",
"wildcard",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-07T11:48:03Z | ---
tags:
- text-to-image
- stable-diffusion
- diffusion-models-class
- dreambooth-hackathon
- wildcard
---
# To use this style, use the prompt: wgz style
**I trained these weights using DreamBooth.**
There are three models in this repository: V1, V2, and BETA.
V1: WOLP.ckpt
V2: wgz v2.ckpt
BETA: wgz beta.ckpt
Let me introduce these versions.
V1 is the earliest; it was trained at 512 resolution.
Advantages: good generation logic; works well with text2img.
Disadvantages: less image detail and weaker imitation of brush strokes; the whites of the eyes easily turn red.
V2 adjusted the training set and was trained at 768 resolution.
Advantages: more detail and better facial rendering.
Disadvantages: eyes tend to come out green, and the generation logic is weaker, so text2img is less usable.
BETA: trained on a relatively complex training set; its logic is poor, but img2img can sometimes produce satisfying results. Try it if you like!
**V2**
Next, I will show some nice V2 output produced with img2img, along with the configuration I used.
**Parameters:**
Prompt :wgz style,portrait of a beautiful women highly detailed,perfect femine face,Classical oil painting,by masamune shirow, by William-tae Kim
Negative prompt: old,men,two poeple,Three dimensional facial features,deep shadows on the five senses,tall nose,Eye shadow,[High bridge], [narrow nose],Deep orbital fossa
Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 9.5, Denoising strength: 0.51, Mask blur: 4
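Outside a WebUI, a rough text-to-image starting point with the `diffusers` library might look like the sketch below (assuming the repository loads as a `StableDiffusionPipeline`, as its tags suggest; the steps and CFG scale mirror the configuration above, while denoising strength and mask blur apply only to img2img and are omitted):
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the repo can be loaded directly as a StableDiffusionPipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "hhyxnh/WLOP-style_stable_diffusion-heywhale", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="wgz style, portrait of a beautiful woman, highly detailed, classical oil painting",
    negative_prompt="old, men, two people",  # shortened; see the full negative prompt above
    num_inference_steps=20,
    guidance_scale=9.5,
).images[0]
image.save("wgz_style_sample.png")
```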
I hope you enjoy this model and create works that satisfy you.

**BETA**
Below is some nice output from BETA. I like its abstract color patches, but the hit rate is not high.

I hope you enjoy this model and create works that satisfy you.
Below is an introduction to V1, along with some nice V1 output.
**V1**
--------------------------
Trained from the Elden Ring style ckpt (SD 1.5).
# Link: [eldenring-ckpt](https://huggingface.co/nitrosocke/elden-ring-diffusion)
# Training parameters:
steps: 12,500
learning rate: 1e-6
instance images: about 100
class images: 200
**There are some samples:**


PS: I have tried hard to achieve the painting style I want with Stable Diffusion, with both failures and successes. With my current skills I can hardly make further progress, so this model will not be updated for a while. If you have ideas or want to improve it together, you can contact me. Happy new year, everyone!
Telegram: https://t.me/hhhyxnh
QQ:1602821649
More pictures are in the image folder.
Thank you. |
uaritm/multilingual_en_uk_pl_ru | uaritm | 2023-06-04T16:34:24Z | 85,968 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers - multilingual - en - ru - uk - pl",
"uk",
"en",
"pl",
"ru",
"dataset:Helsinki-NLP/tatoeba_mt",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-05-12T19:12:27Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- multilingual
- en
- ru
- uk
- pl
license: apache-2.0
datasets:
- Helsinki-NLP/tatoeba_mt
metrics:
- mse
language:
- uk
- en
- pl
- ru
library_name: sentence-transformers
---
# uaritm/multilingual_en_uk_pl_ru
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
The model is used in a service that analyzes multilingual patient complaints to determine which medical specialty is needed: [Virtual General Practice](https://aihealth.site)
You can test the model's quality and speed there.
This model is an updated version of the model: [uaritm/multilingual_en_ru_uk](https://huggingface.co/uaritm/multilingual_en_ru_uk)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('uaritm/multilingual_en_uk_pl_ru')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('uaritm/multilingual_en_uk_pl_ru')
model = AutoModel.from_pretrained('uaritm/multilingual_en_uk_pl_ru')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=uaritm/multilingual_en_uk_pl_ru)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 50184 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```
@misc{Uaritm,
title={sentence-transformers: Semantic similarity of medical texts},
author={Vitaliy Ostashko},
year={2023},
url={https://aihealth.site},
}
```
<!--- Describe where people can find more information --> |
Broszkit/shirafazleen | Broszkit | 2023-06-04T16:31:05Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T16:30:16Z | ---
license: creativeml-openrail-m
---
|
asenella/mmnist_MoPoEconfig_resnet_seed_0_ratio_02_c | asenella | 2023-06-04T16:27:09Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-04T16:26:15Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
uaritm/multilingual_en_ru_uk | uaritm | 2023-06-04T16:27:03Z | 19 | 4 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"multilingual",
"en",
"ru",
"uk",
"pl",
"dataset:ted_multi",
"dataset:Helsinki-NLP/tatoeba_mt",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-09-22T06:33:04Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- multilingual
- en
- ru
- uk
license: apache-2.0
datasets:
- ted_multi
- Helsinki-NLP/tatoeba_mt
language:
- uk
- en
- pl
- ru
metrics:
- mse
library_name: sentence-transformers
---
# uaritm/multilingual_en_ru_uk
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
A newer version of this model that adds Polish is available here: [uaritm/multilingual_en_uk_pl_ru](https://huggingface.co/uaritm/multilingual_en_uk_pl_ru)
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
The model is used in a service that analyzes multilingual patient complaints to determine which medical specialty is needed: [Virtual General Practice](https://aihealth.site)
You can test the model's quality and speed there.
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('uaritm/multilingual_en_ru_uk')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('uaritm/multilingual_en_ru_uk')
model = AutoModel.from_pretrained('uaritm/multilingual_en_ru_uk')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=uaritm/multilingual_en_ru_uk)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 17482 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```
@misc{Uaritm,
title={sentence-transformers: Semantic similarity of medical texts},
author={Vitaliy Ostashko},
year={2022},
url={https://aihealth.site},
}
```
<!--- Describe where people can find more information --> |
casals90/a2c-AntBulletEnv-v0 | casals90 | 2023-06-04T16:16:15Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T16:13:28Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1921.75 +/- 117.38
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
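As a placeholder for the TODO above, a minimal loading sketch with `huggingface_sb3` might look like the following (the checkpoint filename inside the repo is an assumption — check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; verify it against the files in the repository.
checkpoint = load_from_hub(
    repo_id="casals90/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```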
|
rnosov/airoboros-7b-gpt4-sharded | rnosov | 2023-06-04T16:07:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-04T14:50:22Z | Resharded version of https://huggingface.co/jondurbin/airoboros-7b-gpt4 for low RAM enviroments ( Colab, Kaggle etc ) |
Bonosa2/cartpole | Bonosa2 | 2023-06-04T15:30:53Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T15:28:13Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Sav8316/Jarvis | Sav8316 | 2023-06-04T15:29:46Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T15:29:46Z | ---
license: creativeml-openrail-m
---
|
laihuiyuan/DRS-LMM | laihuiyuan | 2023-06-04T15:21:50Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"arxiv:2306.00124",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-01-24T11:14:43Z | ---
language:
- en
license: apache-2.0
---
# Paper
This is an mBART-based model that can be used for both multilingual DRS parsing and DRS-to-text generation, covering four languages (English:EN, German:DE,
Italian:IT, Dutch:NL). It is introduced in the paper [Pre-Trained Language-Meaning Models for Multilingual Parsing and Generation](https://arxiv.org/abs/2306.00124).
# Abstract
Pre-trained language models (PLMs) have achieved great success in NLP and have recently been used for tasks in computational semantics. However, these tasks do not fully benefit from PLMs since meaning representations are not explicitly included in the pre-training stage. We introduce multilingual pre-trained language-meaning models based on Discourse Representation Structures (DRSs), including meaning representations besides natural language texts in the same model, and design a new strategy to reduce the gap between the pre-training and fine-tuning objectives. Since DRSs are language neutral, cross-lingual transfer learning is adopted to further improve the performance of non-English tasks. Automatic evaluation results show that our approach achieves the best performance on both the multilingual DRS parsing and DRS-to-text generation tasks. Correlation analysis between automatic metrics and human judgements on the generation task further validates the effectiveness of our model. Human inspection reveals that out-of-vocabulary tokens are the main cause of erroneous results.
# How to use
```bash
git clone https://github.com/wangchunliu/DRS-pretrained-LMM.git
cd DRS-pretrained-LMM
```
```python
# a case of drs-text generation
from tokenization_mlm import MLMTokenizer
from transformers import MBartForConditionalGeneration
# For DRS parsing, src_lang should be set to en_XX, de_DE, it_IT, or nl_XX
tokenizer = MLMTokenizer.from_pretrained('laihuiyuan/DRS-LMM', src_lang='<drs>')
model = MBartForConditionalGeneration.from_pretrained('laihuiyuan/DRS-LMM')
# gold text: The court is adjourned until 3:00 p.m. on March 1st.
inp_ids = tokenizer.encode(
"court.n.01 time.n.08 EQU now adjourn.v.01 Theme -2 Time -1 Finish +1 time.n.08 ClockTime 15:00 MonthOfYear 3 DayOfMonth 1",
return_tensors="pt")
# For DRS parsing, the forced bos token here should be <drs>
foced_ids = tokenizer.encode("en_XX", add_special_tokens=False, return_tensors="pt")
outs = model.generate(input_ids=inp_ids, forced_bos_token_id=foced_ids.item(), num_beams=5, max_length=150)
text = tokenizer.decode(outs[0].tolist(), skip_special_tokens=True, clean_up_tokenization_spaces=False)
```
# Citation Info
```BibTeX
@inproceedings{wang-etal-2023-pre,
title = "Pre-Trained Language-Meaning Models for Multilingual Parsing and Generation",
author = "Wang, Chunliu and Lai, Huiyuan and Nissim, Malvina and Bos, Johan",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
month = July,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
}
``` |
minosu/godot_dodo_4x_60k_starcoder_15b_2ep_ggml_q4_1 | minosu | 2023-06-04T15:05:51Z | 0 | 6 | null | [
"region:us"
] | null | 2023-06-04T10:09:38Z | # godot_dodo_4x_60k_starcoder_15b_2ep_ggml_q4_1
## Model details
This is a 4-bit quantized ggml conversion of [minosu/godot_dodo_4x_60k_starcoder_15b_2ep](https://huggingface.co/minosu/godot_dodo_4x_60k_starcoder_15b_2ep) for use with [ggml](https://github.com/ggerganov/ggml).
Trained in May 2023.
Godot-Dodo models are instruction-following models finetuned from open-source base models.
Please refer to the README of the [GitHub repository](https://github.com/minosvasilias/godot-dodo) for detailed information.
### Evaluation datasets
The model was evaluated using code instruction prompts. More details in the [GitHub repository](https://github.com/minosvasilias/godot-dodo).
### Training dataset
The model was trained on a 60k-row instruction-following dataset, which is released in the [GitHub repository](https://github.com/minosvasilias/godot-dodo).
### Training parameters
For exact parameters used, please refer to [this page](https://github.com/minosvasilias/godot-dodo/tree/main/models/godot_dodo_4x_60k_starcoder_15b_2ep) in the GitHub repository.
|
k3nshi/ppo-LunarLander-v2 | k3nshi | 2023-06-04T15:01:00Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T15:00:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.64 +/- 18.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Broszkit/irafazleen | Broszkit | 2023-06-04T14:59:04Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T14:56:59Z | ---
license: creativeml-openrail-m
---
|
pandas2002/t5_base | pandas2002 | 2023-06-04T14:29:19Z | 0 | 0 | fairseq | [
"fairseq",
"medical",
"summarization",
"en",
"dataset:ccdv/pubmed-summarization",
"license:afl-3.0",
"region:us"
] | summarization | 2023-06-04T14:27:09Z | ---
license: afl-3.0
datasets:
- ccdv/pubmed-summarization
language:
- en
metrics:
- rouge
library_name: fairseq
pipeline_tag: summarization
tags:
- medical
--- |
nolanaatama/ghstmx | nolanaatama | 2023-06-04T14:21:28Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T14:13:15Z | ---
license: creativeml-openrail-m
---
|
akira1608/bart-1 | akira1608 | 2023-06-04T14:13:46Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-03T15:10:33Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-1
This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yeeunlee/long-ke-t5-base-translation-aihub-ko2en-finetuned | yeeunlee | 2023-06-04T13:57:38Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"longt5",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-06-04T12:16:40Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: long-ke-t5-base-translation-aihub-ko2en-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-ke-t5-base-translation-aihub-ko2en-finetuned
This model is a fine-tuned version of [KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-ko2en](https://huggingface.co/KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-ko2en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0187
- Bleu: 33.4742
- Gen Len: 11.2914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
asenella/mmnist_JNFconfig_resnet_seed_0_ratio_05_c | asenella | 2023-06-04T13:55:10Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-04T13:54:49Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
thefrigidliquidation/nllb-jaen-1.3B-lightnovels | thefrigidliquidation | 2023-06-04T13:38:43Z | 46 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"m2m_100",
"text2text-generation",
"nllb",
"en",
"ja",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-10-01T00:43:59Z | ---
language:
- en
- ja
tags:
- nllb
license: cc-by-nc-4.0
---
# NLLB 1.3B fine-tuned on Japanese to English Light Novel translation
This model was fine-tuned on light novels and web novels for Japanese-to-English translation.
It can translate sentences and paragraphs of up to 512 tokens.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("thefrigidliquidation/nllb-jaen-1.3B-lightnovels")
model = AutoModelForSeq2SeqLM.from_pretrained("thefrigidliquidation/nllb-jaen-1.3B-lightnovels")

# Tokenize the Japanese source text to build `inputs`.
japanese_text = "マイン、ルッツが迎えに来たよ"  # example sentence from the Glossary section below
inputs = tokenizer(japanese_text, return_tensors="pt")

generated_tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id[tokenizer.tgt_lang],
    max_new_tokens=1024,
    no_repeat_ngram_size=6,
).cpu()
translated_text = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
```
Generating with diverse beam search seems to work best. Add the following to `model.generate`:
```python
num_beams=8,
num_beam_groups=4,
do_sample=False,
```
## Glossary
You can provide up to 10 custom translations for nouns and character names at runtime. To do so, surround the Japanese term with term tokens. Prefix the word with one of `<t0>, <t1>, ..., <t9>` and suffix the word with `</t>`. The term will be translated as the prefix term token which can then be string replaced.
For example, in `マイン、ルッツが迎えに来たよ` if you wish to have `マイン` translated as `Myne` you would replace `マイン` with `<t0>マイン</t>`. The model will translate `<t0>マイン</t>、ルッツが迎えに来たよ` as `<t0>, Lutz is here to pick you up.` Then simply do a string replacement on the output, replacing `<t0>` with `Myne`.
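A minimal sketch of that replace-translate-replace flow (the helper below is illustrative and not part of the model's API):
```python
glossary = {"マイン": "Myne"}  # up to 10 entries, mapped to <t0> ... <t9>
def apply_glossary(text, glossary):
    """Wrap glossary terms in term tokens before translation."""
    mapping = {}
    for i, (jp, en) in enumerate(glossary.items()):
        token = f"<t{i}>"
        text = text.replace(jp, f"{token}{jp}</t>")
        mapping[token] = en
    return text, mapping
src_text, mapping = apply_glossary("マイン、ルッツが迎えに来たよ", glossary)
# ... translate src_text as in the usage example above ...
translation = "<t0>, Lutz is here to pick you up."  # example output from this card
for token, en in mapping.items():
    translation = translation.replace(token, en)
print(translation)  # Myne, Lutz is here to pick you up.
```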
## Honorifics
You can force the model to generate or ignore honorifics.
```python
# default, the model decides whether to use honorifics
tokenizer.tgt_lang = "jpn_Jpan"
# no honorifics, the model is discouraged from using honorifics
tokenizer.tgt_lang = "zsm_Latn"
# honorifics, the model is encouraged to use honorifics
tokenizer.tgt_lang = "zul_Latn"
```
|
Broszkit/shahirafazleen | Broszkit | 2023-06-04T13:27:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T13:21:54Z | ---
license: creativeml-openrail-m
---
|
AriannaHeartbell/AH_DL_assignment | AriannaHeartbell | 2023-06-04T13:24:37Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-06-04T13:20:46Z | ---
license: openrail
---
https://github.com/AriannaHeartbell/AHKNUARIN702001_2023S1
order:lr+seed
0.008000_0
0.007000_1
0.007000_2
0.006000_3
0.006000_4
0.006000_5
0.006000_6 |
asenella/mmnist_MVTCAEconfig_resnet_seed_0_ratio_02_c | asenella | 2023-06-04T13:21:09Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-04T13:20:33Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Lakshit11/ppo_LunarLander-v2 | Lakshit11 | 2023-06-04T12:41:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T12:41:11Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.00 +/- 61.99
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
asenella/mmnist_MVAEconfig_resnet_seed_0_ratio_05_c | asenella | 2023-06-04T12:40:37Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-04T12:40:01Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Nonroothp/Tescoba | Nonroothp | 2023-06-04T12:37:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T12:37:44Z | ---
license: creativeml-openrail-m
---
|
jayanta/distilbert-base-uncased-english-sentweet-sentiment | jayanta | 2023-06-04T12:33:01Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-04T12:26:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-english-sentweet-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-english-sentweet-sentiment
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2724
- Accuracy: 0.7674
- Precision: 0.7706
- Recall: 0.7708
- F1: 0.7674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.4377 | 0.7986 | 0.8152 | 0.8064 | 0.7980 |
| No log | 2.0 | 162 | 0.5264 | 0.7847 | 0.8029 | 0.7929 | 0.7839 |
| No log | 3.0 | 243 | 0.7576 | 0.7674 | 0.7706 | 0.7708 | 0.7674 |
| No log | 4.0 | 324 | 1.0652 | 0.7465 | 0.7463 | 0.7475 | 0.7462 |
| No log | 5.0 | 405 | 1.2214 | 0.7431 | 0.7431 | 0.7442 | 0.7427 |
| No log | 6.0 | 486 | 1.2724 | 0.7674 | 0.7706 | 0.7708 | 0.7674 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
|
yeeunlee/opus-mt-ko-en-finetuned | yeeunlee | 2023-06-04T12:15:28Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-06-04T10:02:39Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6795
- Bleu: 51.1268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ashcher51/ppo-LunarLander-v2 | ashcher51 | 2023-06-04T12:07:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T12:07:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.76 +/- 17.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
asenella/mmnist_MoPoEconfig_resnet_seed_0_ratio_05_c | asenella | 2023-06-04T11:59:21Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-04T10:43:22Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/mmnist_MVTCAEconfig_resnet_seed_0_ratio_05_c | asenella | 2023-06-04T11:32:21Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-04T10:22:20Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
abokbot/wikipedia-embedding | abokbot | 2023-06-04T11:22:33Z | 0 | 2 | sentence-transformers | [
"sentence-transformers",
"bi-coder",
"MSMARCO",
"en",
"dataset:abokbot/wikipedia-first-paragraph",
"region:us"
] | null | 2023-06-04T08:58:06Z | ---
datasets:
- abokbot/wikipedia-first-paragraph
language:
- en
library_name: sentence-transformers
tags:
- bi-coder
- MSMARCO
---
# Description
We use MS Marco Encoder msmarco-MiniLM-L-6-v3 from the sentence-transformers library to encode the text from dataset [abokbot/wikipedia-first-paragraph](https://huggingface.co/datasets/abokbot/wikipedia-first-paragraph).
The dataset contains the first paragraphs of the English "20220301.en" version of the [Wikipedia dataset](https://huggingface.co/datasets/wikipedia).
The output is an embedding tensor of size [6458670, 384].
# Code
It was obtained by running the following code.
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
dataset = load_dataset("abokbot/wikipedia-first-paragraph", split="train")  # split name assumed; without it, load_dataset returns a DatasetDict and dataset["text"] would fail
bi_encoder = SentenceTransformer('msmarco-MiniLM-L-6-v3')
bi_encoder.max_seq_length = 256
wikipedia_embedding = bi_encoder.encode(dataset["text"], convert_to_tensor=True, show_progress_bar=True)
```
This operation took 35min on a Google Colab notebook with GPU.
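To query the embedding later for semantic search, it can be loaded alongside the same bi-encoder. A minimal sketch (the embedding file name below is an assumption; check this repository's files for the actual name):
```python
import torch
from sentence_transformers import SentenceTransformer, util
bi_encoder = SentenceTransformer('msmarco-MiniLM-L-6-v3')
wikipedia_embedding = torch.load("wikipedia_embedding.pt")  # hypothetical file name
query_embedding = bi_encoder.encode("What is the capital of France?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, wikipedia_embedding, top_k=5)[0]
print(hits)  # list of {'corpus_id': ..., 'score': ...} entries indexing the wikipedia-first-paragraph dataset
```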
# Reference
More information of MS Marco encoders here https://www.sbert.net/docs/pretrained-models/ce-msmarco.html |
NickThe1/luna | NickThe1 | 2023-06-04T11:20:43Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T11:20:38Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -130.67 +/- 122.16
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'NickThe1/luna'
'batch_size': 512
'minibatch_size': 128}
```
|
NickThe1/rl_course_vizdoom_health_gathering_supreme | NickThe1 | 2023-06-04T11:12:11Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T11:12:04Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.09 +/- 4.55
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r NickThe1/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
psetinek/ppo-LunarLander-base | psetinek | 2023-06-04T10:58:07Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T10:57:50Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.28 +/- 13.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Lolimorimorf/ru_propaganda_opposition_and_neutral_class_model | Lolimorimorf | 2023-06-04T10:56:36Z | 62 | 1 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-04T10:55:25Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: ru_propaganda_opposition_and_neutral_class_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ru_propaganda_opposition_and_neutral_class_model
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0003
- Validation Loss: 0.2057
- Train Accuracy: 0.9641
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 20205, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2750 | 0.1328 | 0.9508 | 0 |
| 0.0855 | 0.1143 | 0.9599 | 1 |
| 0.0384 | 0.1684 | 0.9574 | 2 |
| 0.0284 | 0.1341 | 0.9633 | 3 |
| 0.0182 | 0.1238 | 0.9641 | 4 |
| 0.0107 | 0.2324 | 0.9558 | 5 |
| 0.0098 | 0.2588 | 0.9416 | 6 |
| 0.0056 | 0.1881 | 0.9599 | 7 |
| 0.0098 | 0.1994 | 0.9566 | 8 |
| 0.0016 | 0.1788 | 0.9599 | 9 |
| 0.0021 | 0.1861 | 0.9633 | 10 |
| 0.0004 | 0.1889 | 0.9666 | 11 |
| 0.0004 | 0.1988 | 0.9674 | 12 |
| 0.0005 | 0.1852 | 0.9708 | 13 |
| 0.0003 | 0.2057 | 0.9641 | 14 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
VuDucQuang/cartoon-character-style | VuDucQuang | 2023-06-04T10:51:31Z | 32 | 3 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-04T10:45:46Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Cartoon-Character-Style Dreambooth model trained by VuDucQuang with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
temporary0-0name/Pixelcopter-PLE-v0 | temporary0-0name | 2023-06-04T10:25:11Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-03T07:39:20Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 17.10 +/- 14.52
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
shamiulshifat/q-FrozenLake-v1-4x4-noSlippery | shamiulshifat | 2023-06-04T10:08:57Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T10:08:53Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="shamiulshifat/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Salavat/nllb-200-distilled-600M-finetuned-isv_v2 | Salavat | 2023-06-04T09:53:41Z | 90 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-02T19:29:46Z | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nllb-200-distilled-600M-finetuned-isv_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-200-distilled-600M-finetuned-isv_v2
This model is a fine-tuned version of [Salavat/nllb-200-distilled-600M-finetuned-isv_v2](https://huggingface.co/Salavat/nllb-200-distilled-600M-finetuned-isv_v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0552
- Bleu: 9.1195
- Gen Len: 11.252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 1.5967 | 1.0 | 8000 | 3.0552 | 9.1195 | 11.252 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Edvent/t5-end2end-questions-generation | Edvent | 2023-06-04T09:49:11Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-04T07:35:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5681
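A minimal usage sketch (the `generate questions:` prefix is an assumption based on common T5 end-to-end question-generation setups; check the squad_modified_for_t5_qg dataset card for the exact input format):
```python
from transformers import pipeline
question_generator = pipeline("text2text-generation", model="Edvent/t5-end2end-questions-generation")
# Hypothetical example context; the prompt prefix and </s> suffix are assumptions.
context = "generate questions: Python is a programming language created by Guido van Rossum. </s>"
print(question_generator(context, max_length=128)[0]["generated_text"])
```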
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5733 | 0.34 | 100 | 1.9072 |
| 1.9659 | 0.68 | 200 | 1.7279 |
| 1.8436 | 1.02 | 300 | 1.6666 |
| 1.7433 | 1.35 | 400 | 1.6389 |
| 1.7143 | 1.69 | 500 | 1.6149 |
| 1.6904 | 2.03 | 600 | 1.6086 |
| 1.6305 | 2.37 | 700 | 1.5930 |
| 1.6268 | 2.71 | 800 | 1.5896 |
| 1.6151 | 3.05 | 900 | 1.5926 |
| 1.5712 | 3.39 | 1000 | 1.5857 |
| 1.5671 | 3.73 | 1100 | 1.5736 |
| 1.5518 | 4.06 | 1200 | 1.5784 |
| 1.5372 | 4.4 | 1300 | 1.5825 |
| 1.5244 | 4.74 | 1400 | 1.5702 |
| 1.5178 | 5.08 | 1500 | 1.5708 |
| 1.4954 | 5.42 | 1600 | 1.5712 |
| 1.4866 | 5.76 | 1700 | 1.5692 |
| 1.5027 | 6.1 | 1800 | 1.5685 |
| 1.4778 | 6.44 | 1900 | 1.5712 |
| 1.477 | 6.77 | 2000 | 1.5681 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ZuWen/Kafka | ZuWen | 2023-06-04T09:32:16Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T09:17:10Z | ---
license: creativeml-openrail-m
---
|
Hans14/PyramidsDRL | Hans14 | 2023-06-04T09:30:28Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-06-04T09:30:22Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: Hans14/PyramidsDRL
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jucamohedano/ppo-PyramidsRND | jucamohedano | 2023-06-04T09:26:09Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-06-04T09:25:09Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: jucamohedano/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
edmagall/hate_speech_offensive | edmagall | 2023-06-04T09:23:00Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2023-06-04T09:22:43Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
masterpen/distilgpt2-finetuned-wikitext2 | masterpen | 2023-06-04T08:46:14Z | 170 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-04T07:57:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.76 | 1.0 | 2334 | 3.6658 |
| 3.6526 | 2.0 | 4668 | 3.6468 |
| 3.6004 | 3.0 | 7002 | 3.6425 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
camenduru/one-shot-talking-face-20.04-a10 | camenduru | 2023-06-04T08:41:34Z | 0 | 6 | null | [
"arxiv:2112.02749",
"region:us"
] | null | 2023-06-04T08:41:08Z | # One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning (AAAI 2022)
#### [Paper](https://arxiv.org/pdf/2112.02749.pdf) | [Demo](https://www.youtube.com/watch?v=HHj-XCXXePY)
#### Requirements
- Python >= 3.6 , Pytorch >= 1.8 and ffmpeg
- Set up [OpenFace](https://github.com/TadasBaltrusaitis/OpenFace)
- We use the OpenFace tools to extract the initial pose of the reference image
- Make sure you have installed this tool, and set the `OPENFACE_POSE_EXTRACTOR_PATH` in `config.py`. For example, it should be the absolute path of the "`FeatureExtraction.exe`" for Windows.
- Other requirements are listed in the 'requirements.txt'
#### Pretrained Checkpoint
Please download the pretrained checkpoint from [google-drive](https://drive.google.com/file/d/1mjFEozPR_2vMaVRMd9Agk_sU1VaiUYMl/view?usp=sharing) and unzip it to the directory (`/checkpoints`). Or manually modify the settings of `GENERATOR_CKPT` and `AUDIO2POSE_CKPT` in the `config.py`.
#### Extract phoneme
We employ the [CMU phoneset](https://github.com/cmusphinx/cmudict) to represent phonemes; the extra 'SIL' means silence. All the phonemes can be seen in '`phindex.json`'.
We have extracted the phonemes for the audios in the '`sample/audio`' directory. For other audios, you can extract the phonemes with other ASR tools and then map them to the CMU phoneset, or email [email protected] for help.
#### Generate Demo Results
```
python test_script.py --img_path xxx.jpg --audio_path xxx.wav --phoneme_path xxx.json --save_dir "YOUR_DIR"
```
Note that the input images must have the same height and width, and the face should be appropriately cropped as in `samples/imgs`. You can also preprocess your images with `image_preprocess.py`.
#### License and Citation
```
@InProceedings{wang2021one,
author = {Suzhen Wang and Lincheng Li and Yu Ding and Xin Yu},
title = {One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning},
booktitle = {AAAI 2022},
year = {2022},
}
```
#### Acknowledgement
This codebase is based on [First Order Motion Model](https://github.com/AliaksandrSiarohin/first-order-model) and [imaginaire](https://github.com/NVlabs/imaginaire), thanks for their contributions.
|
FrixH2022/counindivixen | FrixH2022 | 2023-06-04T08:38:06Z | 0 | 3 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T05:11:16Z | ---
license: creativeml-openrail-m
---
A merge of indigo16 with counterfeit3 at 0.5 and furryvixen with counterfeit3 at 0.3, with the two results then mixed 50/50. It works best together with NovelAI-style furry character LoRAs for producing anime-style furry characters; used directly without a LoRA it tends to produce human-skinned furries. |
camenduru/pocketsphinx-20.04-a10 | camenduru | 2023-06-04T08:22:53Z | 0 | 0 | null | [
"region:us"
] | null | 2023-06-04T08:22:46Z | PocketSphinx 5.0.1
==================
This is PocketSphinx, one of Carnegie Mellon University's open source large
vocabulary, speaker-independent continuous speech recognition engines.
Although this was at one point a research system, active development
has largely ceased and it has become very, very far from the state of
the art. I am making a release, because people are nonetheless using
it, and there are a number of historical errors in the build system
and API which needed to be corrected.
The version number is strangely large because there was a "release"
that people are using called 5prealpha, and we will use proper
[semantic versioning](https://semver.org/) from now on.
**Please see the LICENSE file for terms of use.**
Installation
------------
We now use CMake for building, which should give reasonable results
across Linux and Windows. Not certain about Mac OS X because I don't
have one of those. In addition, the audio library, which never really
built or worked correctly on any platform at all, has simply been
removed.
There is no longer any dependency on SphinxBase. There is no
SphinxBase anymore. This is not the SphinxBase you're looking for.
All your SphinxBase are belong to us.
To install the Python module in a virtual environment (replace
`~/ve_pocketsphinx` with the virtual environment you wish to create),
from the top level directory:
```
python3 -m venv ~/ve_pocketsphinx
. ~/ve_pocketsphinx/bin/activate
pip install .
```
To install the C library and bindings (assuming you have access to
/usr/local - if not, use `-DCMAKE_INSTALL_PREFIX` to set a different
prefix in the first `cmake` command below):
```
cmake -S . -B build
cmake --build build
cmake --build build --target install
```
Usage
-----
The `pocketsphinx` command-line program reads single-channel 16-bit
PCM audio from standard input or one or more files, and attempts to
recognize speech in it using the default acoustic and language model.
It accepts a large number of options which you probably don't care
about, a *command* which defaults to `live`, and one or more inputs
(except in `align` mode), or `-` to read from standard input.
If you have a single-channel WAV file called "speech.wav" and you want
to recognize speech in it, you can try doing this (the results may not
be wonderful):
pocketsphinx single speech.wav
If your input is in some other format I suggest converting it with
`sox` as described below.
The commands are as follows:
- `help`: Print a long list of those options you don't care about.
- `config`: Dump configuration as JSON to standard output (can be
loaded with the `-config` option).
- `live`: Detect speech segments in each input, run recognition
on them (using those options you don't care about), and write the
results to standard output in line-delimited JSON. I realize this
isn't the prettiest format, but it sure beats XML. Each line
contains a JSON object with these fields, which have short names
to make the lines more readable:
- `b`: Start time in seconds, from the beginning of the stream
- `d`: Duration in seconds
- `p`: Estimated probability of the recognition result, i.e. a
number between 0 and 1 representing the likelihood of the input
according to the model
- `t`: Full text of recognition result
- `w`: List of segments (usually words), each of which in turn
contains the `b`, `d`, `p`, and `t` fields, for start, end,
probability, and the text of the word. If `-phone_align yes`
has been passed, then a `w` field will be present containing
phone segmentations, in the same format.
- `single`: Recognize each input as a single utterance, and write a
JSON object in the same format described above.
- `align`: Align a single input file (or `-` for standard input) to
a word sequence, and write a JSON object in the same format
described above. The first positional argument is the input, and
all subsequent ones are concatenated to make the text, to avoid
surprises if you forget to quote it. You are responsible for
normalizing the text to remove punctuation, uppercase, centipedes,
etc. For example:
pocketsphinx align goforward.wav "go forward ten meters"
By default, only word-level alignment is done. To get phone
alignments, pass `-phone_align yes` in the flags, e.g.:
pocketsphinx -phone_align yes align audio.wav $text
This will make not particularly readable output, but you can use
[jq](https://stedolan.github.io/jq/) to clean it up. For example,
you can get just the word names and start times like this:
pocketsphinx align audio.wav $text | jq '.w[]|[.t,.b]'
Or you could get the phone names and durations like this:
pocketsphinx -phone_align yes align audio.wav $text | jq '.w[]|.w[]|[.t,.d]'
There are many, many other possibilities, of course.
- `soxflags`: Return arguments to `sox` which will create the
appropriate input format. Note that because the `sox`
command-line is slightly quirky these must always come *after* the
filename or `-d` (which tells `sox` to read from the microphone).
You can run live recognition like this:
sox -d $(pocketsphinx soxflags) | pocketsphinx -
or decode from a file named "audio.mp3" like this:
sox audio.mp3 $(pocketsphinx soxflags) | pocketsphinx -
By default only errors are printed to standard error, but if you want
more information you can pass `-loglevel INFO`. Partial results are
not printed, maybe they will be in the future, but don't hold your
breath.
Programming
-----------
For programming, see the [examples directory](./examples/) for a
number of examples of using the library from C and Python. You can
also read the [documentation for the Python
API](https://pocketsphinx.readthedocs.io) or [the C
API](https://cmusphinx.github.io/doc/pocketsphinx/)
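For a quick flavor of the Python API, here is a minimal decoding sketch (using the bundled default model; the audio file name and sample rate are assumptions, and the input must be single-channel 16-bit PCM):
```python
from pocketsphinx import Decoder
decoder = Decoder(samprate=16000)  # loads the default acoustic and language model
decoder.start_utt()
with open("goforward.raw", "rb") as fh:  # hypothetical raw PCM file
    while True:
        buf = fh.read(4096)
        if not buf:
            break
        decoder.process_raw(buf)
decoder.end_utt()
if decoder.hyp() is not None:
    print(decoder.hyp().hypstr)
```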
Authors
-------
PocketSphinx is ultimately based on `Sphinx-II` which in turn was
based on some older systems at Carnegie Mellon University, which were
released as free software under a BSD-like license thanks to the
efforts of Kevin Lenzo. Much of the decoder in particular was written
by Ravishankar Mosur (look for "rkm" in the comments), but various
other people contributed as well, see [the AUTHORS file](./AUTHORS)
for more details.
David Huggins-Daines (the author of this document) is
guilty^H^H^H^H^Hresponsible for creating `PocketSphinx` which added
various speed and memory optimizations, fixed-point computation, JSGF
support, portability to various platforms, and a somewhat coherent
API. He then disappeared for a while.
Nickolay Shmyrev took over maintenance for quite a long time
afterwards, and a lot of code was contributed by Alexander Solovets,
Vyacheslav Klimkov, and others.
Currently this is maintained by David Huggins-Daines again.
|
jalaluddin94/nli_mbert | jalaluddin94 | 2023-06-04T08:16:00Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-15T02:30:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: nli_mbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli_mbert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6569
- Accuracy: 0.7419
- Precision: 0.7419
- Recall: 0.7419
- F1 Score: 0.7426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 101
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| 1.403 | 1.0 | 10330 | 1.3860 | 0.7128 | 0.7128 | 0.7128 | 0.7142 |
| 1.3213 | 2.0 | 20660 | 1.3367 | 0.7365 | 0.7365 | 0.7365 | 0.7371 |
| 1.1611 | 3.0 | 30990 | 1.4699 | 0.7396 | 0.7396 | 0.7396 | 0.7406 |
| 1.0222 | 4.0 | 41320 | 1.6050 | 0.7374 | 0.7374 | 0.7374 | 0.7383 |
| 0.9008 | 5.0 | 51650 | 1.6569 | 0.7419 | 0.7419 | 0.7419 | 0.7426 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Hans14/Snowball-Target | Hans14 | 2023-06-04T08:13:07Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-06-04T08:13:02Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: Hans14/Snowball-Target
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sw1214/Reinforce-0 | sw1214 | 2023-06-04T08:05:57Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T08:05:48Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aphexblake/599-new-blows | aphexblake | 2023-06-04T08:04:54Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:aphexblake/677-500-lop",
"base_model:adapter:aphexblake/677-500-lop",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-04T08:04:50Z | ---
license: creativeml-openrail-m
base_model: aphexblake/new-msf
instance_prompt: Blowjob
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - 599-new-blows
These are LoRA adaptation weights for [aphexblake/new-msf](https://huggingface.co/aphexblake/new-msf). The weights were trained on the instance prompt "Blowjob" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
|
kevinpro/Vicuna-7B-CoT | kevinpro | 2023-06-04T08:04:42Z | 8 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-01T12:14:15Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
SFT to enhance the CoT capability of Vicuna
If you find the model helpful, please click "like" to support us. We also welcome feedback on your usage experience and any issues you encounter in the issues section.
Another 13B version: https://huggingface.co/kevinpro/Vicuna-13B-CoT
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cetusian/PPO-Huggy | cetusian | 2023-06-04T07:42:47Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-04T07:42:41Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: cetusian/PPO-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
okingjo/Single-identifier_LORA_Model | okingjo | 2023-06-04T07:28:32Z | 0 | 8 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-26T07:22:43Z | ---
license: creativeml-openrail-m
---
# Okingjo's Single-identifier LORAs
I will share most of my LORA models with a single identifier here. By saying "single", I mean that only one character with one costume is stored within the model.
Not only will the LORA models be posted here, training setups and tips will also be shared.
I'm still learning, so any comments/feedback are welcome!
## Characters from Genshin Impact
### Sangonomiya-Kokomi / 珊瑚宫心海
#### Brief intro
LORA of Sangonomiya Kokomi, with her default costume in game.
civitAI page [download](https://civitai.com/models/9186/sangonomiya-kokomi)
#### Training dataset
149 images of Kokomi:
* 4 nude illustrations, to ensure the AI knows that the costume is removable
* 85 normal illustrations of Kokomi, multiple angle, style and composition
* 30 nude 360-degree snapshots of Kokomi's 3D model
* 30 normal 360-degree snapshots of Kokomi's 3D model
Since only one costume is included, all 149 images are placed inside one folder.
#### Captioning
WD14 captioning was used instead of the danbooru captioning, since the former does not crop/resize the images.
The threshold is usually set to 0.75-0.8, since I don't want very long and sometimes inaccurate captions for my training data.
After captioning was done, I added "sangonomiya kokomi" after "1girl" in every generated caption file as the triggering prompt. Some of the caption files were empty, so I had to type the words manually.
#### Training setup
Trained with Kohya_SS stable diffusion trainer
Base model was [Anything V3.0 full](https://huggingface.co/Linaqruf/anything-v3.0/blob/main/anything-v3-fp32-pruned.safetensors)
The training process consists of two phases. The first uses the default parameters of:
* learning_rate: 0.0001
* text_encoder_lr: 5e-5
* unet_lr: 0.0001
20 repeats, and 5 epochs
Then, for phase 2, all three learning rates were decreased to 1/10 and training continued for another 5 epochs.
#### results
V1.0 samples



## Characters from Honkai Impact 3rd
### Raiden Mei adult ver / 雷电芽衣
#### Brief intro
LORA of the adult Raiden Mei from Honkai Impact 3rd, Post-Honkai Odyssey, with her default costume in game.
civitAI page [download](https://civitai.com/models/13023/raiden-mei-adult-ver)
#### Training dataset
96 images of Raiden Mei:
* 36 illustrations, both SFW and NSFW; 3 of them are in other costumes.
* 30 360-degree 3D model snapshots for accuracy.
* 30 360-degree nude 3D model snapshots to ensure the costume is removable/replaceable.
Since only one costume is included, all 96 images are placed inside one folder.
#### Captioning
WD14 captioning was used instead of the danbooru captioning, since the former does not crop/resize the images.
The threshold is usually set to 0.75-0.8, since I don't want very long and sometimes inaccurate captions for my training data.
After captioning was done, I added "raiden mei" after "1girl" in every generated caption file as the triggering prompt. Some of the caption files were empty, so I had to type the words manually.
#### Training setup
Trained with Kohya_SS stable diffusion trainer
Base model was [Anything V3.0 full](https://huggingface.co/Linaqruf/anything-v3.0/blob/main/anything-v3-fp32-pruned.safetensors)
The training process consists of two phases. The first uses the default parameters of:
* learning_rate: 0.0001
* text_encoder_lr: 5e-5
* unet_lr: 0.0001
20 repeats, and 3 epochs
Then, for phase 2, all three learning rates were decreased to 1/10 and training continued for another 8 epochs.
#### results
V1.0 samples


|
Skanderbeg/a2c-AntBulletEnv-v0 | Skanderbeg | 2023-06-04T07:23:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T07:22:30Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1318.05 +/- 153.45
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
KaiquanMah/LunarLander-PPO | KaiquanMah | 2023-06-04T07:18:40Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T07:18:33Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -127.91 +/- 41.55
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'KaiquanMah/LunarLander-PPO'
'batch_size': 512
'minibatch_size': 128}
```
|
mallikrao2/QA_MODEL | mallikrao2 | 2023-06-04T06:55:14Z | 61 | 1 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-04T06:49:50Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: QA_MODEL
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# QA_MODEL
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16596, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.8.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48 | gokuls | 2023-06-04T06:49:42Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-01T23:11:11Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2_complete_training_new_wt_init_48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2_complete_training_new_wt_init_48
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6821
- Accuracy: 0.5170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 6.4035 | 0.08 | 10000 | 6.3587 | 0.1308 |
| 6.171 | 0.16 | 20000 | 6.1316 | 0.1478 |
| 4.1594 | 0.25 | 30000 | 3.9287 | 0.3710 |
| 3.7683 | 0.33 | 40000 | 3.5265 | 0.4190 |
| 3.5679 | 0.41 | 50000 | 3.3359 | 0.4400 |
| 3.4509 | 0.49 | 60000 | 3.2192 | 0.4534 |
| 3.3501 | 0.57 | 70000 | 3.1324 | 0.4631 |
| 3.2776 | 0.66 | 80000 | 3.0619 | 0.4713 |
| 3.211 | 0.74 | 90000 | 3.0021 | 0.4779 |
| 3.1587 | 0.82 | 100000 | 2.9570 | 0.4836 |
| 3.1076 | 0.9 | 110000 | 2.9157 | 0.4883 |
| 3.0716 | 0.98 | 120000 | 2.8727 | 0.4931 |
| 3.0248 | 1.07 | 130000 | 2.8422 | 0.4969 |
| 2.9941 | 1.15 | 140000 | 2.8102 | 0.5009 |
| 2.9629 | 1.23 | 150000 | 2.7851 | 0.5041 |
| 2.9422 | 1.31 | 160000 | 2.7617 | 0.5065 |
| 2.9062 | 1.39 | 170000 | 2.7347 | 0.5102 |
| 2.8847 | 1.47 | 180000 | 2.7163 | 0.5126 |
| 2.8556 | 1.56 | 190000 | 2.6974 | 0.5148 |
| 2.8483 | 1.64 | 200000 | 2.6821 | 0.5170 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nolanaatama/lwscpldrvc800pchsvrs | nolanaatama | 2023-06-04T06:46:17Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T06:39:24Z | ---
license: creativeml-openrail-m
---
|
iamkzntsv/simple-diffusion-butterflies-32 | iamkzntsv | 2023-06-04T06:39:26Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-06-04T06:36:43Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('iamkzntsv/simple-diffusion-butterflies-32')
image = pipeline().images[0]
image
```
|
gokuls/bert_12_layer_model_v2_complete_training_new_48 | gokuls | 2023-06-04T06:34:49Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-01T22:58:08Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2_complete_training_new_48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2_complete_training_new_48
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5669
- Accuracy: 0.4078
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 6.5761 | 0.08 | 10000 | 6.5400 | 0.1270 |
| 6.3285 | 0.16 | 20000 | 6.3050 | 0.1411 |
| 6.2279 | 0.25 | 30000 | 6.2128 | 0.1449 |
| 6.1754 | 0.33 | 40000 | 6.1535 | 0.1478 |
| 6.1291 | 0.41 | 50000 | 6.1181 | 0.1488 |
| 6.1008 | 0.49 | 60000 | 6.0846 | 0.1495 |
| 6.0716 | 0.57 | 70000 | 6.0609 | 0.1504 |
| 5.9041 | 0.66 | 80000 | 5.8688 | 0.1577 |
| 5.7999 | 0.74 | 90000 | 5.7595 | 0.1691 |
| 5.6997 | 0.82 | 100000 | 5.6469 | 0.1828 |
| 5.6002 | 0.9 | 110000 | 5.5358 | 0.1963 |
| 5.4372 | 0.98 | 120000 | 5.3113 | 0.2253 |
| 5.0465 | 1.07 | 130000 | 4.8765 | 0.2743 |
| 4.7373 | 1.15 | 140000 | 4.5536 | 0.3095 |
| 4.3779 | 1.23 | 150000 | 4.2078 | 0.3417 |
| 4.1299 | 1.31 | 160000 | 3.9910 | 0.3630 |
| 3.9585 | 1.39 | 170000 | 3.8347 | 0.3798 |
| 3.8423 | 1.47 | 180000 | 3.7274 | 0.3911 |
| 3.7403 | 1.56 | 190000 | 3.6422 | 0.3996 |
| 3.6767 | 1.64 | 200000 | 3.5669 | 0.4078 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
HDiffusion/Metharme13B-ggmlv3-q4_0 | HDiffusion | 2023-06-04T06:31:02Z | 0 | 0 | null | [
"region:us"
] | null | 2023-06-04T05:10:21Z | Metharme13B merged and quantized to 4-bit for use with llama.cpp and other ggml applications. |
Tejas2000/Wav2Vec_Deploy | Tejas2000 | 2023-06-04T06:30:18Z | 141 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech_to_text",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mr",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-04T06:20:24Z | ---
license: apache-2.0
datasets:
- openslr
language:
- mr
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- speech_to_text
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
model-index:
- name: XLSR Wav2Vec2 Large 53 Marathi by Sumedh Khodke
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR mr
type: openslr
metrics:
- name: Test WER
type: wer
value: 12.7
---
# Wav2Vec2-Large-XLSR-53-Marathi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Marathi using the [Open SLR64](http://openslr.org/64/) dataset. When using this model, make sure that your speech input is sampled at 16kHz. This data contains only female voices but the model works well for male voices too. Trained on Google Colab Pro on Tesla P100 16GB GPU.<br>
**WER (Word Error Rate) on the Test Set**: 12.70 %
## Usage
The model can be used directly without a language model as follows, given that your dataset has Marathi `actual_text` and `path_in_folder` columns:
```python
import torch, torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Since Marathi is not present on Common Voice, the script for reading this dataset can be picked up from the eval script below
mr_test_dataset = all_data['test']
processor = Wav2Vec2Processor.from_pretrained("Tejas2000/SpeechRecog")
model = Wav2Vec2ForCTC.from_pretrained("Tejas2000/SpeechRecog")
resampler = torchaudio.transforms.Resample(48_000, 16_000) # first arg: input sample rate, second arg: output sample rate
# Preprocessing the dataset: we need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path_in_folder"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
mr_test_dataset = mr_test_dataset.map(speech_file_to_array_fn)
inputs = processor(mr_test_dataset["speech"][:5], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", mr_test_dataset["actual_text"][:5])
```
## Evaluation
Evaluated on 10% of the Marathi data on Open SLR-64.
```python
import os, re, torch, torchaudio
from datasets import Dataset, load_metric
import pandas as pd
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
#below is a custom script to be used for reading marathi dataset since its not present on the Common Voice
dataset_path = "./OpenSLR-64_Marathi/mr_in_female/" #TODO : include the path of the dataset extracted from http://openslr.org/64/
audio_df = pd.read_csv(os.path.join(dataset_path,'line_index.tsv'),sep='\t',header=None)
audio_df.columns = ['path_in_folder','actual_text']
audio_df['path_in_folder'] = audio_df['path_in_folder'].apply(lambda x: dataset_path + x + '.wav')
audio_df = audio_df.sample(frac=1, random_state=2020).reset_index(drop=True) #seed number is important for reproducibility of WER score
all_data = Dataset.from_pandas(audio_df)
all_data = all_data.train_test_split(test_size=0.10,seed=2020) #seed number is important for reproducibility of WER score
mr_test_dataset = all_data['test']
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Tejas2000/SpeechRecog")
model = Wav2Vec2ForCTC.from_pretrained("Tejas2000/SpeechRecog")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the dataset: we need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["actual_text"] = re.sub(chars_to_ignore_regex, '', batch["actual_text"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path_in_folder"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
mr_test_dataset = mr_test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = mr_test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["actual_text"])))
```
## Training
Train-Test ratio was 90:10.
The training notebook Colab link [here](https://colab.research.google.com/drive/1wX46fjExcgU5t3AsWhSPTipWg_aMDg2f?usp=sharing).
## Training Config and Summary
weights-and-biases run summary [here](https://wandb.ai/wandb/xlsr/runs/3itdhtb8/overview?workspace=user-sumedhkhodke) |
gokuls/bert_12_layer_model_v1_complete_training_new_48 | gokuls | 2023-06-04T06:29:24Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-01T22:57:41Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v1_complete_training_new_48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v1_complete_training_new_48
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9912
- Accuracy: 0.4898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 6.5749 | 0.08 | 10000 | 6.5381 | 0.1270 |
| 6.333 | 0.16 | 20000 | 6.3097 | 0.1410 |
| 6.2341 | 0.25 | 30000 | 6.2179 | 0.1450 |
| 6.1803 | 0.33 | 40000 | 6.1586 | 0.1478 |
| 6.0775 | 0.41 | 50000 | 6.0471 | 0.1520 |
| 5.8957 | 0.49 | 60000 | 5.8458 | 0.1655 |
| 5.7655 | 0.57 | 70000 | 5.7040 | 0.1846 |
| 5.6281 | 0.66 | 80000 | 5.5480 | 0.2026 |
| 5.1797 | 0.74 | 90000 | 5.0004 | 0.2661 |
| 4.7518 | 0.82 | 100000 | 4.5751 | 0.3097 |
| 4.3368 | 0.9 | 110000 | 4.1455 | 0.3518 |
| 3.9513 | 0.98 | 120000 | 3.7659 | 0.3964 |
| 3.682 | 1.07 | 130000 | 3.5328 | 0.4248 |
| 3.5114 | 1.15 | 140000 | 3.3715 | 0.4441 |
| 3.3789 | 1.23 | 150000 | 3.2500 | 0.4591 |
| 3.2776 | 1.31 | 160000 | 3.1468 | 0.4709 |
| 3.204 | 1.39 | 170000 | 3.0899 | 0.4784 |
| 3.1051 | 1.47 | 180000 | 2.9912 | 0.4898 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Rajashekar47/Speech | Rajashekar47 | 2023-06-04T06:07:18Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-06-04T06:06:11Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fatcat22/q-Taxi-v3 | fatcat22 | 2023-06-04T06:03:33Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T06:03:31Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="fatcat22/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
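Note that `load_from_hub` here is not part of a published package; in the Deep RL course notebooks it is usually defined along these lines (a minimal sketch, assuming the Q-table was pushed as a pickle file; `gym.make` above additionally requires `import gym`):
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-learning model (a dict holding the Q-table and env id) from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```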
|
software-vagabond/a2c-PandaReachDense-v2 | software-vagabond | 2023-06-04T05:47:17Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-02T11:16:48Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.68 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
You can load the trained agent from the Hub as follows (the checkpoint filename below assumes the usual `<algo>-<env>.zip` naming and is not verified against this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint and load the A2C policy; the filename is an assumption based on the standard SB3 naming.
checkpoint = load_from_hub("software-vagabond/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Itsmealvi/LarissaRc | Itsmealvi | 2023-06-04T05:42:53Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T05:37:47Z | ---
license: creativeml-openrail-m
---
|
Shridipta-06/ppo-LunarLander-v2 | Shridipta-06 | 2023-06-04T05:37:37Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T05:37:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.91 +/- 18.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
You can load the trained agent from the Hub as follows (the checkpoint filename below assumes the usual `<algo>-<env>.zip` naming and is not verified against this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint and load the PPO policy; the filename is an assumption based on the standard SB3 naming.
checkpoint = load_from_hub("Shridipta-06/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
pulkitmehtawork/text_classification_pulkit | pulkitmehtawork | 2023-06-04T05:11:07Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-04T04:18:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: text_classification_pulkit
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9318
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_classification_pulkit
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2319
- Accuracy: 0.9318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2329 | 1.0 | 1563 | 0.1903 | 0.9268 |
| 0.1494 | 2.0 | 3126 | 0.2319 | 0.9318 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
InfiniteMoon/00loha | InfiniteMoon | 2023-06-04T05:03:36Z | 0 | 1 | null | [
"license:gpl",
"region:us"
] | null | 2023-06-04T04:58:05Z | ---
license: gpl
---
A LyCORIS model made with my consent. It has no trigger words, excels at male furries, and can also draw pretty girls; results for female furries and ordinary men are average. Model No. 14 is recommended. The V1.2 model is strongly discouraged, as it is severely overfitted. |
limcheekin/flan-alpaca-gpt4-xl-ct2 | limcheekin | 2023-06-04T05:02:53Z | 2 | 0 | transformers | [
"transformers",
"ctranslate2",
"flan-alpaca-gpt4-xl",
"quantization",
"int8",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-01T08:17:12Z | ---
license: apache-2.0
language:
- en
tags:
- ctranslate2
- flan-alpaca-gpt4-xl
- quantization
- int8
---
# Model Card for Flan-Alpaca-GPT4-XL Q8
The model is a quantized version of [declare-lab/flan-alpaca-gpt4-xl](https://huggingface.co/declare-lab/flan-alpaca-gpt4-xl) with int8 quantization.
## Model Details
### Model Description
The model was quantized using [CTranslate2](https://opennmt.net/CTranslate2/) with the following command:
```
ct2-transformers-converter --model declare-lab/flan-alpaca-gpt4-xl --output_dir declare-lab/flan-alpaca-gpt4-xl-ct2 --copy_files generation_config.json tokenizer.json tokenizer_config.json special_tokens_map.json spiece.model --quantization int8 --force --low_cpu_mem_usage
```
If you want to perform the quantization yourself, you need to install the following dependencies:
```
pip install -qU ctranslate2 transformers[torch] sentencepiece accelerate
```
- **Shared by:** Lim Chee Kin
- **License:** Apache 2.0
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import ctranslate2
import transformers
translator = ctranslate2.Translator("limcheekin/flan-alpaca-gpt4-xl-ct2")
tokenizer = transformers.AutoTokenizer.from_pretrained("limcheekin/flan-alpaca-gpt4-xl-ct2")
input_text = "translate English to German: The house is wonderful."
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_text))
results = translator.translate_batch([input_tokens])
output_tokens = results[0].hypotheses[0]
output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))
print(output_text)
```
The code is taken from https://opennmt.net/CTranslate2/guides/transformers.html#t5.
The key method in the code above is `translate_batch`; you can find [its supported parameters here](https://opennmt.net/CTranslate2/python/ctranslate2.Translator.html#ctranslate2.Translator.translate_batch).
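For example, decoding options can be passed directly to `translate_batch`; the values below are purely illustrative and not tuned for this model:
```python
# Illustrative decoding options; check the translate_batch documentation for defaults and the full list.
results = translator.translate_batch(
    [input_tokens],
    beam_size=1,
    sampling_topk=40,
    sampling_temperature=0.7,
    max_decoding_length=256,
)
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])))
```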
|
Hiecheol/bert-base-cased-wikitext2 | Hiecheol | 2023-06-04T04:58:57Z | 198 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-04T04:14:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0905 | 1.0 | 2346 | 7.0503 |
| 6.9026 | 2.0 | 4692 | 6.8756 |
| 6.8795 | 3.0 | 7038 | 6.8918 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu116
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aphexblake/sunset | aphexblake | 2023-06-04T04:52:41Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:aphexblake/200-msf-v2",
"base_model:adapter:aphexblake/200-msf-v2",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-04T04:52:41Z | ---
license: creativeml-openrail-m
base_model: aphexblake/200-msf-v2
instance_prompt: Sunset
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - sunset
These are LoRA adaptation weights for [aphexblake/200-msf-v2](https://huggingface.co/aphexblake/200-msf-v2). The weights were trained on the instance prompt "Sunset" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
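No usage snippet is included in this card; a minimal sketch with diffusers might look like the following, assuming the base checkpoint is available in diffusers format and your diffusers version supports `load_lora_weights`:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumptions: the base model loads as a StableDiffusionPipeline and this repo's LoRA weights are
# compatible with load_lora_weights; adjust dtype/device to your hardware.
pipe = StableDiffusionPipeline.from_pretrained("aphexblake/200-msf-v2", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("aphexblake/sunset")

image = pipe("Sunset", num_inference_steps=30).images[0]
image.save("sunset.png")
```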
|
aphexblake/lora-dreambooth-2023-06-04-06-50-45 | aphexblake | 2023-06-04T04:50:53Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:aphexblake/200-msf-v2",
"base_model:adapter:aphexblake/200-msf-v2",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-04T04:50:52Z | ---
license: creativeml-openrail-m
base_model: aphexblake/200-msf-v2
instance_prompt: Doggy style
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - lora-dreambooth-2023-06-04-06-50-45
These are LoRA adaptation weights for [aphexblake/200-msf-v2](https://huggingface.co/aphexblake/200-msf-v2). The weights were trained on the instance prompt "Doggy style" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
|
alfarez404/badpurpose | alfarez404 | 2023-06-04T04:28:22Z | 0 | 0 | null | [
"license:deepfloyd-if-license",
"region:us"
] | null | 2023-06-04T04:28:22Z | ---
license: deepfloyd-if-license
---
|
Hiecheol/gpt2-wikitext2 | Hiecheol | 2023-06-04T04:14:09Z | 174 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-04T03:24:09Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5594 | 1.0 | 2249 | 6.4748 |
| 6.194 | 2.0 | 4498 | 6.2026 |
| 6.0181 | 3.0 | 6747 | 6.1142 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu116
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PabloGuinea/bert-base | PabloGuinea | 2023-06-04T03:51:36Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-04T03:39:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.42627737226277373
- name: Recall
type: recall
value: 0.2706209453197405
- name: F1
type: f1
value: 0.3310657596371882
- name: Accuracy
type: accuracy
value: 0.9382241032875892
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2898
- Precision: 0.4263
- Recall: 0.2706
- F1: 0.3311
- Accuracy: 0.9382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 107 | 0.3180 | 0.3739 | 0.1511 | 0.2152 | 0.9341 |
| No log | 2.0 | 214 | 0.2898 | 0.4263 | 0.2706 | 0.3311 | 0.9382 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
zshahzad/finetuning-sentiment-model-3000-samples | zshahzad | 2023-06-04T03:40:12Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T21:31:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8566666666666667
- name: F1
type: f1
value: 0.86084142394822
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3227
- Accuracy: 0.8567
- F1: 0.8608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gsn-codes/dqn-SpaceInvadersNoFrameskip-v4 | gsn-codes | 2023-06-04T02:57:53Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-04T02:57:17Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 595.50 +/- 155.84
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gsn-codes -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gsn-codes -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gsn-codes
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
uzanzz/Risa | uzanzz | 2023-06-04T02:54:35Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T02:52:51Z | ---
license: creativeml-openrail-m
---
|
nolanaatama/jjsbjtr15krvcstpsncgm | nolanaatama | 2023-06-04T02:10:58Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T02:07:50Z | ---
license: creativeml-openrail-m
---
|