modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
GabrielCaido/ppo-Huggy | GabrielCaido | 2023-06-29T14:50:49Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-29T14:50:38Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: GabrielCaido/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
TieIncred/pokemon-lora | TieIncred | 2023-06-29T14:45:29Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-29T12:30:08Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - TieIncred/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
asti339/emotions2 | asti339 | 2023-06-29T14:42:03Z | 4 | 2 | tf-keras | [
"tf-keras",
"image-classification",
"region:us"
] | image-classification | 2023-06-24T13:33:43Z | ---
pipeline_tag: image-classification
--- |
Malaika/Reinforce-Pixelcopter-PLE-v0-Test3 | Malaika | 2023-06-29T14:36:10Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T14:36:07Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-Test3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 19.90 +/- 17.51
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
username93/8C_ML_U2_P_RL_Huggy | username93 | 2023-06-29T14:33:29Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-29T14:33:07Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: username93/8C_ML_U2_P_RL_Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
AAOBA/ppo-Huggy | AAOBA | 2023-06-29T14:32:27Z | 17 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-29T13:52:11Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chikoto/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
amm297/aux | amm297 | 2023-06-29T14:18:38Z | 34 | 0 | peft | [
"peft",
"text-generation",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-29T11:22:02Z | ---
library_name: peft
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
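For reference, a sketch of the same settings expressed as a `transformers` `BitsAndBytesConfig` (reconstructed from the list above, not taken from the actual training script):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with fp16 compute and no double quantization, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```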
### Framework versions
- PEFT 0.4.0.dev0 |
jcr987/distilhubert-finetuned-gtzan | jcr987 | 2023-06-29T14:06:28Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-06-29T11:55:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5890
- Accuracy: 0.83
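A minimal inference sketch (not part of the original card); the audio path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="jcr987/distilhubert-finetuned-gtzan")
print(classifier("path/to/track.wav")[:3])  # top predicted GTZAN genres with scores
```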
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7983 | 1.0 | 113 | 1.8827 | 0.4 |
| 1.1998 | 2.0 | 226 | 1.2412 | 0.66 |
| 1.0158 | 3.0 | 339 | 0.9866 | 0.74 |
| 0.7012 | 4.0 | 452 | 0.7353 | 0.81 |
| 0.5321 | 5.0 | 565 | 0.7164 | 0.78 |
| 0.3458 | 6.0 | 678 | 0.6390 | 0.81 |
| 0.2513 | 7.0 | 791 | 0.5696 | 0.83 |
| 0.3806 | 8.0 | 904 | 0.6538 | 0.8 |
| 0.1816 | 9.0 | 1017 | 0.6225 | 0.82 |
| 0.3578 | 10.0 | 1130 | 0.5890 | 0.83 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tlapusan/bert-finetuned-ner_tmp | tlapusan | 2023-06-29T14:04:14Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-29T13:56:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner_tmp
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9303630363036304
- name: Recall
type: recall
value: 0.9488387748232918
- name: F1
type: f1
value: 0.9395100816530578
- name: Accuracy
type: accuracy
value: 0.9860628716077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_tmp
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Precision: 0.9304
- Recall: 0.9488
- F1: 0.9395
- Accuracy: 0.9861
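A minimal inference sketch (not part of the original card); the example sentence is made up:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="tlapusan/bert-finetuned-ner_tmp", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))  # grouped entities with labels and scores
```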
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0858 | 1.0 | 1756 | 0.0679 | 0.9210 | 0.9359 | 0.9284 | 0.9829 |
| 0.0343 | 2.0 | 3512 | 0.0602 | 0.9304 | 0.9488 | 0.9395 | 0.9861 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jcnecio/rl_course_vizdoom_health_gathering_supreme | jcnecio | 2023-06-29T13:55:20Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T13:55:15Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.93 +/- 5.92
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r jcnecio/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# The module path below assumes the standard sample-factory 2.0 ViZDoom example scripts
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# The module path below assumes the standard sample-factory 2.0 ViZDoom example scripts
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
sleepynlp/Reinforce-Pixelcopter-PLE-v0-Leo | sleepynlp | 2023-06-29T13:31:13Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T13:31:12Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-Leo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -2.70 +/- 0.46
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
blackstone/spkrec-ecapa-cnceleb | blackstone | 2023-06-29T13:19:09Z | 0 | 0 | speechbrain | [
"speechbrain",
"embeddings",
"Speaker",
"Verification",
"Identification",
"pytorch",
"ECAPA",
"TDNN",
"audio-classification",
"en",
"dataset:voxceleb",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] | audio-classification | 2023-06-29T12:42:49Z | ---
language: en
thumbnail: null
tags:
- speechbrain
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- ECAPA
- TDNN
license: apache-2.0
datasets:
- voxceleb
metrics:
- EER
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
pipeline_tag: audio-classification
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Speaker Verification with ECAPA-TDNN on CNCeleb
This repository provides a pretrained ECAPA-TDNN model built with SpeechBrain.
The system can also be used to extract speaker embeddings.
It is trained on CNCeleb1 + CNCeleb2 training data.
The model performance on the CNCeleb1 test set (cleaned) is:
| Release | EER(%) | MinDCF(p=0.01) |
|:-------------:|:--------------:|:--------------:|
| 15-05-22 | 8.44 | 0.4587 |
## Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
You can find our training results (models, logs, etc) [here]().
### Compute your speaker embeddings
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="blackstone/spkrec-ecapa-cnceleb")
signal, fs = torchaudio.load('tests/samples/ASR/spk1_snt1.wav')
embeddings = classifier.encode_batch(signal)
```
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
### Perform Speaker Verification
```python
from speechbrain.pretrained import SpeakerRecognition
verification = SpeakerRecognition.from_hparams(source="blackstone/spkrec-ecapa-voxceleb", savedir="pretrained_models/spkrec-ecapa-cnceleb")
score, prediction = verification.verify_files("tests/samples/ASR/spk1_snt1.wav", "tests/samples/ASR/spk2_snt1.wav") # Different Speakers
score, prediction = verification.verify_files("tests/samples/ASR/spk1_snt1.wav", "tests/samples/ASR/spk1_snt2.wav") # Same Speaker
```
The prediction is 1 if the two input signals are from the same speaker and 0 otherwise.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
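For example (a sketch following the loading snippet above; it assumes a CUDA device is available):
```python
from speechbrain.pretrained import EncoderClassifier

# Same loading call as above, but placing the model on the GPU
classifier = EncoderClassifier.from_hparams(
    source="blackstone/spkrec-ecapa-cnceleb",
    run_opts={"device": "cuda"},
)
```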
#### References
```
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
``` |
dar-tau/Reinforce-CartPole-v1 | dar-tau | 2023-06-29T13:09:20Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T12:58:10Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 465.40 +/- 74.22
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
cgutknecht/gelectra_large_gsqd-gq-LHM | cgutknecht | 2023-06-29T12:52:17Z | 115 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"electra",
"question-answering",
"de",
"dataset:squad",
"dataset:deepset/germanquad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-05T09:41:43Z | ---
license: mit
datasets:
- squad
- deepset/germanquad
language:
- de
---
# Overview
German QA model fine-tuned on question-answer pairs for Bürgerbüro service documents
**Base model:** deepset/gelectra-large
**Finetuning** in sequential steps on:
1. Machine-translated (en->de) SQuAD 1.0
2. GermanQuAD: deepset/germanquad
3. Custom LHM-QA-Dataset (>reference following<)
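A minimal usage sketch (not part of the original card); the German question and context strings are invented placeholders:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="cgutknecht/gelectra_large_gsqd-gq-LHM")
result = qa(
    question="Wo kann ich einen Personalausweis beantragen?",
    context="Den Personalausweis beantragen Sie persönlich im Bürgerbüro Ihres Wohnorts.",
)
print(result["answer"], result["score"])
```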
**Evaluation:** Reaches an F1 score of 70.0 on the LHM QA test data |
ahishamm/vit-huge-modified-augmented-ph2-patch-14 | ahishamm | 2023-06-29T12:50:06Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-29T12:27:18Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-modified-augmented-ph2-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-modified-augmented-ph2-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/Modified_Augmented_PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
- Accuracy: 1.0
- Recall: 1.0
- F1: 1.0
- Precision: 1.0
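A minimal inference sketch (not part of the original card); the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ahishamm/vit-huge-modified-augmented-ph2-patch-14")
print(classifier("path/to/dermoscopic_image.jpg"))  # predicted classes with scores
```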
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.0996 | 0.29 | 50 | 0.1378 | 0.9366 | 0.9366 | 0.9366 | 0.9366 |
| 0.0096 | 0.59 | 100 | 0.0509 | 0.9743 | 0.9743 | 0.9743 | 0.9743 |
| 0.0049 | 0.88 | 150 | 0.0085 | 0.9983 | 0.9983 | 0.9983 | 0.9983 |
| 0.0029 | 1.18 | 200 | 0.0037 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0022 | 1.47 | 250 | 0.0028 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0018 | 1.76 | 300 | 0.0022 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0015 | 2.06 | 350 | 0.0021 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0013 | 2.35 | 400 | 0.0017 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 2.65 | 450 | 0.0015 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 2.94 | 500 | 0.0014 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.001 | 3.24 | 550 | 0.0013 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0009 | 3.53 | 600 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0009 | 3.82 | 650 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V10.12 | SHENMU007 | 2023-06-29T12:46:28Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-06-29T09:48:12Z | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jaderfigueredo/ppo-Huggy | jaderfigueredo | 2023-06-29T12:42:22Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-29T12:42:18Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jaderfigueredo/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
ahishamm/vit-large-modified-augmented-ph2-patch-32 | ahishamm | 2023-06-29T12:26:49Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-29T12:12:08Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-modified-augmented-ph2-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-modified-augmented-ph2-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/Modified_Augmented_PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Accuracy: 1.0
- Recall: 1.0
- F1: 1.0
- Precision: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.1255 | 0.29 | 50 | 0.1555 | 0.9538 | 0.9538 | 0.9538 | 0.9538 |
| 0.0875 | 0.59 | 100 | 0.0656 | 0.9726 | 0.9726 | 0.9726 | 0.9726 |
| 0.0612 | 0.88 | 150 | 0.0219 | 0.9949 | 0.9949 | 0.9949 | 0.9949 |
| 0.0034 | 1.18 | 200 | 0.0031 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0021 | 1.47 | 250 | 0.0022 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0017 | 1.76 | 300 | 0.0017 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 2.06 | 350 | 0.0015 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0012 | 2.35 | 400 | 0.0013 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 2.65 | 450 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.001 | 2.94 | 500 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.001 | 3.24 | 550 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0009 | 3.53 | 600 | 0.0009 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0009 | 3.82 | 650 | 0.0009 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Allenpai/alpaca-200 | Allenpai | 2023-06-29T12:22:16Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-29T12:21:29Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
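For reference, a sketch of the same settings as a `transformers` `BitsAndBytesConfig` (reconstructed from the list above, not taken from the actual training script):
```python
from transformers import BitsAndBytesConfig

# Plain 8-bit loading with the default int8 threshold, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)
```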
### Framework versions
- PEFT 0.4.0.dev0
|
Shrawani/squad-bloom-3b-v1 | Shrawani | 2023-06-29T12:18:34Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-29T12:18:31Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
NickyNicky/mpt-7b-chat-Peft-h2ogpt_oig_oasst1_instruct-gpt4all-max_length_3072-V1 | NickyNicky | 2023-06-29T12:18:00Z | 2 | 1 | peft | [
"peft",
"region:us"
] | null | 2023-06-29T12:17:53Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
TecnoIA/Fistful_of_Yen_Internet_Meme | TecnoIA | 2023-06-29T12:17:00Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-29T12:14:02Z | ---
license: creativeml-openrail-m
---
|
ahishamm/vit-large-augmented-ph2-patch-32 | ahishamm | 2023-06-29T12:11:45Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-29T11:55:41Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-augmented-ph2-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-augmented-ph2-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/Augmented_PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5737
- Accuracy: 0.8701
- Recall: 0.8701
- F1: 0.8701
- Precision: 0.8701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.0405 | 0.36 | 50 | 0.6853 | 0.8342 | 0.8342 | 0.8342 | 0.8342 |
| 0.0107 | 0.72 | 100 | 0.8199 | 0.8256 | 0.8256 | 0.8256 | 0.8256 |
| 0.0338 | 1.09 | 150 | 0.5737 | 0.8701 | 0.8701 | 0.8701 | 0.8701 |
| 0.0026 | 1.45 | 200 | 0.6008 | 0.8684 | 0.8684 | 0.8684 | 0.8684 |
| 0.0019 | 1.81 | 250 | 0.6275 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
| 0.0016 | 2.17 | 300 | 0.6488 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
| 0.0013 | 2.54 | 350 | 0.6639 | 0.8752 | 0.8752 | 0.8752 | 0.8752 |
| 0.0012 | 2.9 | 400 | 0.6757 | 0.8752 | 0.8752 | 0.8752 | 0.8752 |
| 0.0011 | 3.26 | 450 | 0.6844 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
| 0.001 | 3.62 | 500 | 0.6895 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
| 0.001 | 3.99 | 550 | 0.6913 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jcnecio/ppo-LunarLander-v2-v2 | jcnecio | 2023-06-29T12:09:07Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T12:07:11Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -154.39 +/- 57.59
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'jcnecio/ppo-LunarLander-v2-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
ahishamm/vit-large-augmented-ph2-patch-16 | ahishamm | 2023-06-29T11:55:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-29T11:40:37Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-augmented-ph2-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-augmented-ph2-patch-16
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/Augmented_PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5307
- Accuracy: 0.8735
- Recall: 0.8735
- F1: 0.8735
- Precision: 0.8735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.2064 | 0.36 | 50 | 0.5307 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
| 0.1145 | 0.72 | 100 | 0.8837 | 0.7470 | 0.7470 | 0.7470 | 0.7470 |
| 0.4187 | 1.09 | 150 | 0.9485 | 0.6256 | 0.6256 | 0.6256 | 0.6256 |
| 0.0756 | 1.45 | 200 | 0.6959 | 0.8325 | 0.8325 | 0.8325 | 0.8325 |
| 0.0696 | 1.81 | 250 | 0.7697 | 0.8171 | 0.8171 | 0.8171 | 0.8171 |
| 0.0251 | 2.17 | 300 | 0.7361 | 0.8325 | 0.8325 | 0.8325 | 0.8325 |
| 0.0604 | 2.54 | 350 | 0.9345 | 0.8427 | 0.8427 | 0.8427 | 0.8427 |
| 0.0005 | 2.9 | 400 | 0.9581 | 0.8513 | 0.8513 | 0.8513 | 0.8513 |
| 0.0005 | 3.26 | 450 | 1.0674 | 0.8444 | 0.8444 | 0.8444 | 0.8444 |
| 0.005 | 3.62 | 500 | 0.9464 | 0.8564 | 0.8564 | 0.8564 | 0.8564 |
| 0.0002 | 3.99 | 550 | 0.9575 | 0.8564 | 0.8564 | 0.8564 | 0.8564 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
T-Systems-onsite/cross-en-de-pt-roberta-sentence-transformer | T-Systems-onsite | 2023-06-29T11:45:43Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"de",
"pt",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language:
- en
- de
- pt
license: mit
tags:
- sentence_embedding
--- |
qPilz/ppo-Huggy | qPilz | 2023-06-29T11:42:45Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-29T11:42:44Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: qPilz/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
GabrielNewell/ppo-Huggy | GabrielNewell | 2023-06-29T11:42:04Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-29T11:42:00Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: GabrielNewell/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
poisson-fish/ultralm-13b-GPTQ | poisson-fish | 2023-06-29T11:40:49Z | 10 | 1 | transformers | [
"transformers",
"llama",
"text-generation",
"dataset:stingning/ultrachat",
"arxiv:2305.14233",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-29T08:29:08Z | ---
datasets:
- stingning/ultrachat
---
This is [openbmb/UltraLM-13b](https://huggingface.co/openbmb/UltraLM-13b) recovered with [huggyllama/llama-13b](https://huggingface.co/huggyllama/llama-13b) and quantized to 4-bit GPTQ with the following config:
```python
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=32,
desc_act=True,
)
```
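A minimal loading sketch with AutoGPTQ (not part of the original card); the presence of tokenizer files in this repo and AutoGPTQ's default checkpoint discovery are assumptions:
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

tokenizer = AutoTokenizer.from_pretrained("poisson-fish/ultralm-13b-GPTQ")
model = AutoGPTQForCausalLM.from_quantized("poisson-fish/ultralm-13b-GPTQ", device="cuda:0")

# Prompt follows the multi-turn template described in the original card below
prompt = "User: Write a short haiku about the sea.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```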
# Original Model Card:
# UltraLM-13b
<!-- Provide a quick summary of what the model is/does. -->
This is UltraLM-13b delta weights, a chat language model trained upon [UltraChat](https://github.com/thunlp/UltraChat)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
The model is fine-tuned based on LLaMA-13b with a multi-turn chat-format template as below
```
User: instruction 1<eos_token>
Assistant: response 1<eos_token>
User: instruction 2<eos_token>
Assistant: response 2<eos_token>
...
```
- **License:** UltraLM is based on LLaMA and should be used under LLaMA's [model license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
- **Finetuned from model:** LLaMA-13b
- **Finetuned on data:** [UltraChat](https://github.com/thunlp/UltraChat)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [UltraChat](https://github.com/thunlp/UltraChat)
- **Paper:** [arxiv](https://arxiv.org/abs/2305.14233)
- **Demo:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
To use this model, you need to [recover](https://github.com/thunlp/UltraChat/tree/main/UltraLM) the full model from the delta weights and perform inference following the template below:
```
[Optional]User: system prompt<eos_token>
User: user input<eos_token>
Assistant:
```
|
ahishamm/vit-base-augmented-ph2-patch-16 | ahishamm | 2023-06-29T11:30:47Z | 206 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-29T11:21:44Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-augmented-ph2-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-augmented-ph2-patch-16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/Augmented_PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5420
- Accuracy: 0.8444
- Recall: 0.8444
- F1: 0.8444
- Precision: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.0592 | 0.36 | 50 | 0.7161 | 0.8068 | 0.8068 | 0.8068 | 0.8068 |
| 0.0703 | 0.72 | 100 | 0.5420 | 0.8444 | 0.8444 | 0.8444 | 0.8444 |
| 0.0042 | 1.09 | 150 | 0.5557 | 0.8821 | 0.8821 | 0.8821 | 0.8821 |
| 0.0034 | 1.45 | 200 | 0.6464 | 0.8701 | 0.8701 | 0.8701 | 0.8701 |
| 0.0023 | 1.81 | 250 | 0.7943 | 0.8410 | 0.8410 | 0.8410 | 0.8410 |
| 0.0018 | 2.17 | 300 | 0.7109 | 0.8598 | 0.8598 | 0.8598 | 0.8598 |
| 0.0015 | 2.54 | 350 | 0.7254 | 0.8598 | 0.8598 | 0.8598 | 0.8598 |
| 0.0013 | 2.9 | 400 | 0.7364 | 0.8598 | 0.8598 | 0.8598 | 0.8598 |
| 0.0013 | 3.26 | 450 | 0.7438 | 0.8615 | 0.8615 | 0.8615 | 0.8615 |
| 0.0012 | 3.62 | 500 | 0.7489 | 0.8615 | 0.8615 | 0.8615 | 0.8615 |
| 0.0012 | 3.99 | 550 | 0.7506 | 0.8615 | 0.8615 | 0.8615 | 0.8615 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
schirmacher/ppo-LunarLander-v2 | schirmacher | 2023-06-29T11:29:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T10:34:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.87 +/- 15.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
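The block above is left as a placeholder; a minimal loading and evaluation sketch, assuming the checkpoint is stored under the conventional name `ppo-LunarLander-v2.zip`, could look like this:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# The checkpoint filename is a guess; it is not stated in this card
checkpoint = load_from_hub(repo_id="schirmacher/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```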
|
jvvelzen/Yaxi-v3_3 | jvvelzen | 2023-06-29T11:28:02Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T11:28:00Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Yaxi-v3_3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # the Taxi-v3 environment comes from gym/gymnasium

# `load_from_hub` is the helper defined in the Deep RL Course notebook (not a library import)
model = load_from_hub(repo_id="jvvelzen/Yaxi-v3_3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ce-dric/dqn-SpaceInvadersNoFrameskip-v4 | ce-dric | 2023-06-29T11:18:34Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T10:00:12Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 644.50 +/- 232.78
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ce-dric -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ce-dric -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ce-dric
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
desh2608/icefall-asr-tedlium3-zipformer | desh2608 | 2023-06-29T11:07:35Z | 0 | 0 | null | [
"tensorboard",
"en",
"dataset:tedlium3",
"license:apache-2.0",
"region:us"
] | null | 2023-06-16T05:41:06Z | ---
license: apache-2.0
datasets:
- tedlium3
language:
- en
metrics:
- wer
---
### TedLium3 Zipformer
**`rnnt_type=regular`**
The WERs are
| | dev | test | comment |
|------------------------------------|------------|------------|------------------------------------------|
| greedy search | 6.74 | 6.16 | --epoch 50, --avg 22, --max-duration 500 |
| beam search (beam size 4) | 6.56 | 5.95 | --epoch 50, --avg 22, --max-duration 500 |
| modified beam search (beam size 4) | 6.54 | 6.00 | --epoch 50, --avg 22, --max-duration 500 |
| fast beam search (set as default) | 6.91 | 6.28 | --epoch 50, --avg 22, --max-duration 500 |
The training command for reproducing these results is given below:
```
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./zipformer/train.py \
--use-fp16 true \
--world-size 4 \
--num-epochs 50 \
--start-epoch 0 \
--exp-dir zipformer/exp \
--max-duration 1000
```
The tensorboard training log can be found at
https://tensorboard.dev/experiment/AKXbJha0S9aXyfmuvG4h5A/#scalars
The decoding command is:
```
epoch=50
avg=22
## greedy search
./zipformer/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir zipformer/exp \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 500
## beam search
./zipformer/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir zipformer/exp \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 500 \
--decoding-method beam_search \
--beam-size 4
## modified beam search
./zipformer/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir zipformer/exp \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 500 \
--decoding-method modified_beam_search \
--beam-size 4
## fast beam search
./zipformer/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir ./zipformer/exp \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 1500 \
--decoding-method fast_beam_search \
--beam 4 \
--max-contexts 4 \
--max-states 8
```
**`rnnt_type=modified`**
Using the code from this PR: https://github.com/k2-fsa/icefall/pull/1125.
The WERs are
| | dev | test | comment |
|------------------------------------|------------|------------|------------------------------------------|
| greedy search | 6.32 | 5.83 | --epoch 50, --avg 22, --max-duration 500 |
| modified beam search (beam size 4) | 6.16 | 5.79 | --epoch 50, --avg 22, --max-duration 500 |
| fast beam search (set as default) | 6.30 | 5.89 | --epoch 50, --avg 22, --max-duration 500 |
The training command for reproducing these results is given below:
```
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./zipformer/train.py \
--use-fp16 true \
--world-size 4 \
--num-epochs 50 \
--start-epoch 0 \
--exp-dir zipformer/exp \
--max-duration 1000 \
--rnnt-type modified
```
The tensorboard training log can be found at
https://tensorboard.dev/experiment/3d4bYmbJTGiWQQaW88CVEQ/#scalars
The decoding commands are the same as above. |
mcamara/ppo-LunarLander-v2 | mcamara | 2023-06-29T11:05:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T11:05:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.10 +/- 18.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
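The block above is left as a placeholder; a minimal loading and evaluation sketch, assuming the checkpoint is stored under the conventional name `ppo-LunarLander-v2.zip`, could look like this:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# The checkpoint filename is a guess; it is not stated in this card
checkpoint = load_from_hub(repo_id="mcamara/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```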
|
ahishamm/vit-huge-isic-sharpened-patch-14 | ahishamm | 2023-06-29T11:03:59Z | 189 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-29T10:57:01Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-isic-sharpened-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-isic-sharpened-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/isic_sharpened_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5863
- Accuracy: 0.8056
- Recall: 0.8056
- F1: 0.8056
- Precision: 0.8056
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
TobiTob/decision_transformer_merged1 | TobiTob | 2023-06-29T11:02:34Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"decision_transformer",
"generated_from_trainer",
"dataset:city_learn",
"endpoints_compatible",
"region:us"
] | null | 2023-06-29T10:38:25Z | ---
tags:
- generated_from_trainer
datasets:
- city_learn
model-index:
- name: decision_transformer_merged1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# decision_transformer_merged1
This model is a fine-tuned version of [](https://huggingface.co/) on the city_learn dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
linxxx3/test-model | linxxx3 | 2023-06-29T10:59:53Z | 0 | 0 | transformers | [
"transformers",
"mytag:1",
"license:artistic-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-28T07:32:21Z | ---
license: artistic-2.0
tags:
- mytag:1
library_name: transformers
--- |
ahishamm/vit-large-isic-sharpened-patch-32 | ahishamm | 2023-06-29T10:56:33Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-29T10:50:53Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-isic-sharpened-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-isic-sharpened-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/isic_sharpened_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6395
- Accuracy: 0.7778
- Recall: 0.7778
- F1: 0.7778
- Precision: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
monkirai/FisioSalutValles | monkirai | 2023-06-29T10:51:33Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-06-29T10:50:17Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahishamm/vit-large-isic-sharpened-patch-16 | ahishamm | 2023-06-29T10:50:35Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-29T10:44:56Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-isic-sharpened-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-isic-sharpened-patch-16
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/isic_sharpened_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6853
- Accuracy: 0.75
- Recall: 0.75
- F1: 0.75
- Precision: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-base-isic-sharpened-patch-16 | ahishamm | 2023-06-29T10:39:18Z | 222 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-29T10:34:24Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-isic-sharpened-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-isic-sharpened-patch-16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/isic_sharpened_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6163
- Accuracy: 0.7639
- Recall: 0.7639
- F1: 0.7639
- Precision: 0.7639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
coreml-community/coreml-MeinaMix-v9_cn | coreml-community | 2023-06-29T10:36:16Z | 0 | 5 | null | [
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-29T04:17:21Z | ---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).
- This model can be used with ControlNet.
<br>
# MeinaMix-v9_cn:
Source(s): [CivitAI](https://civitai.com/models/7240?modelVersionId=46137)<br>
MeinaMix's objective is to produce good art with little prompting.
I created a Discord server where you can post images you have generated, discuss prompts and/or ask for help: https://discord.gg/meinaverse
I also have a Ko-fi and a Patreon page where you can support me or buy me a coffee <3; it will be very much appreciated:
https://ko-fi.com/meina and https://www.patreon.com/MeinaMix
MeinaMix is officially hosted for online generation in
- Sinkin.ai
- Magespace
- Tensor
- Dazzleai
MeinaMix and the other Meina models will ALWAYS be FREE.
<br><br>
## Recommendations of use:
Enable Quantization in K samplers.
Hires.fix is needed for prompts where the character is far away in order to make decent images; it drastically improves the quality of faces and eyes!
Sampler: Euler a: 40~60 steps
Sampler: DPM++ SDE Karras: 30~60 steps
CFG Scale: 7
Resolutions: 512x768, 512x1024 for Portrait
Resolutions: 768x512, 1024x512, 1536x512 for Landscape
Hires.fix: R-ESRGAN 4x+Anime6b, with 10 steps at 0.1 up to 0.3 denoising
Clip Skip: 2
Negatives: (worst quality:2, low quality:2), (zombie, sketch, interlocked fingers, comic)



 |
qPilz/ppo-LunarLander-v2 | qPilz | 2023-06-29T10:34:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T10:34:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -1491.00 +/- 954.99
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; adjust it to the .zip actually stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="qPilz/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Malaika/rl_course_vizdoom_health_gathering_supreme | Malaika | 2023-06-29T10:27:45Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T10:27:38Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.21 +/- 2.37
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Malaika/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
vlkn/falcon_instruct_deft | vlkn | 2023-06-29T10:08:43Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-06-29T09:24:12Z | ---
tags:
- generated_from_trainer
model-index:
- name: falcon_instruct_deft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_instruct_deft
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 300
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nitinjainbotstar/alpaca | nitinjainbotstar | 2023-06-29T10:07:10Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-29T10:07:06Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch that reproduces it follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
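A minimal sketch of loading this adapter with the config above (the base model id is a placeholder, since it is not documented in this card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reproduces the 8-bit settings listed above; the 4-bit fields keep their defaults.
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

base_model_id = "<base-model-id>"  # placeholder: the base model is not stated in this card
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(model, "nitinjainbotstar/alpaca")
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
```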
### Framework versions
- PEFT 0.4.0.dev0
|
alfajmahabri/qr | alfajmahabri | 2023-06-29T10:06:23Z | 0 | 1 | null | [
"region:us"
] | null | 2023-06-29T10:01:40Z | title: QR Code AI Art Generator
emoji: 📱🔲
colorFrom: MediumSeaGreen
colorTo: CornflowerBlue
sdk: gradio
sdk_version: 3.35.2
app_file: app.py
pinned: false
suggested_hardware: t4-medium
startup_duration_timeout: 1h
duplicated_from: huggingface-projects/QR-code-AI-art-generator |
julien-c/EsperBERTo-small-pos | julien-c | 2023-06-29T09:49:17Z | 106 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"onnx",
"safetensors",
"roberta",
"token-classification",
"eo",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language: eo
thumbnail: https://huggingface.co/blog/assets/01_how-to-train/EsperBERTo-thumbnail-v2.png
widget:
- text: "Mi estas viro kej estas tago varma."
---
# EsperBERTo: RoBERTa-like Language model trained on Esperanto
**Companion model to blog post https://huggingface.co/blog/how-to-train** 🔥
## Training Details
- current checkpoint: 566000
- machine name: `galinette`

## Example pipeline
```python
from transformers import TokenClassificationPipeline, pipeline
MODEL_PATH = "./models/EsperBERTo-small-pos/"
nlp = pipeline(
"ner",
model=MODEL_PATH,
tokenizer=MODEL_PATH,
)
# or instantiate a TokenClassificationPipeline directly.
nlp("Mi estas viro kej estas tago varma.")
# {'entity': 'PRON', 'score': 0.9979867339134216, 'word': ' Mi'}
# {'entity': 'VERB', 'score': 0.9683094620704651, 'word': ' estas'}
# {'entity': 'VERB', 'score': 0.9797462821006775, 'word': ' estas'}
# {'entity': 'NOUN', 'score': 0.8509314060211182, 'word': ' tago'}
# {'entity': 'ADJ', 'score': 0.9996201395988464, 'word': ' varma'}
``` |
Jumartineze/bert-base-spanish-wwm-uncased-finetuned-MeIA-AnalisisDeSentimientos | Jumartineze | 2023-06-29T09:45:59Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-27T05:54:49Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-spanish-wwm-uncased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-uncased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9394
- F1: 0.5876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9694 | 1.0 | 837 | 0.9393 | 0.5784 |
| 0.825 | 2.0 | 1674 | 0.9394 | 0.5876 |
| 0.6932 | 3.0 | 2511 | 0.9883 | 0.5870 |
| 0.5868 | 4.0 | 3348 | 1.0267 | 0.5864 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Matthijs/mms-tts-kor | Matthijs | 2023-06-29T09:37:36Z | 139 | 2 | transformers | [
"transformers",
"pytorch",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2023-06-27T13:18:15Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS) : Text-to-Speech Models
This repository contains the **Korean (kor)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage
Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
## Usage
Using this checkpoint from Hugging Face Transformers:
```python
from transformers import VitsModel, VitsMmsTokenizer
import torch
model = VitsModel.from_pretrained("Matthijs/mms-tts-kor")
tokenizer = VitsMmsTokenizer.from_pretrained("Matthijs/mms-tts-kor")
text = "some example text in the Korean language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs)
from IPython.display import Audio
Audio(output.audio[0], rate=16000)
```
Note: For this checkpoint, the input text must be converted to the Latin alphabet first using the [uroman](https://github.com/isi-nlp/uroman) tool.
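If you do not have a romanizer at hand, a minimal sketch of calling uroman from Python is shown below (it assumes the uroman repository is cloned locally and Perl is installed; the script path is a placeholder):
```python
import subprocess

def romanize(text: str, uroman_pl: str = "./uroman/bin/uroman.pl") -> str:
    """Run the uroman Perl script (stdin -> stdout) to get a Latin-alphabet version of the text."""
    result = subprocess.run(
        ["perl", uroman_pl], input=text, capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

print(romanize("안녕하세요"))  # pass the romanized string to the tokenizer above
```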
## Model credits
This model was developed by Vineel Pratap et al. and is licensed as **CC-BY-NC 4.0**
```bibtex
@article{pratap2023mms,
    title={Scaling Speech Technology to 1,000+ Languages},
    author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
    journal={arXiv},
    year={2023}
}
```
|
dhkim2810/MobileSAM | dhkim2810 | 2023-06-29T09:34:09Z | 0 | 21 | null | [
"arxiv:2306.14289",
"arxiv:2304.02643",
"license:mit",
"region:us"
] | null | 2023-06-28T04:10:23Z | ---
license: mit
---
# Faster Segment Anything (MobileSAM)
<!-- Provide a quick summary of what the model is/does. -->
- **Repository:** [Github - MobileSAM](https://github.com/ChaoningZhang/MobileSAM)
- **Paper:** [Faster Segment Anything: Towards Lightweight SAM for Mobile Applications](https://arxiv.org/pdf/2306.14289.pdf)
- **Demo:** [HuggingFace Demo](https://huggingface.co/spaces/dhkim2810/MobileSAM)
**MobileSAM** performs on par with the original SAM (at least visually) and keeps exactly the same pipeline as the original SAM except for a change to the image encoder. Specifically, we replace the original heavyweight ViT-H encoder (632M) with a much smaller Tiny-ViT (5M). On a single GPU, MobileSAM runs around 12ms per image: 8ms on the image encoder and 4ms on the mask decoder.
The comparison of the ViT-based image encoders is summarized as follows:
Image Encoder | Original SAM | MobileSAM
:------------:|:-------------:|:---------:
Parameters | 611M | 5M
Speed | 452ms | 8ms
Original SAM and MobileSAM have exactly the same prompt-guided mask decoder:
Mask Decoder | Original SAM | MobileSAM
:-----------------------------------------:|:---------:|:-----:
Parameters | 3.876M | 3.876M
Speed | 4ms | 4ms
The comparison of the whole pipeline is summarized as follows:
Whole Pipeline (Enc+Dec) | Original SAM | MobileSAM
:-----------------------------------------:|:---------:|:-----:
Parameters | 615M | 9.66M
Speed | 456ms | 12ms
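Since the pipeline is identical to the original SAM, a point-prompt call looks the same as with `segment_anything`. The snippet below is an illustration only: it assumes the `mobile_sam` package from the repository above is installed, that its API mirrors `segment_anything`, and the checkpoint path is a placeholder.
```python
import cv2
import numpy as np
from mobile_sam import sam_model_registry, SamPredictor

# "vit_t" selects the Tiny-ViT image encoder; the checkpoint path is a placeholder.
sam = sam_model_registry["vit_t"](checkpoint="./weights/mobile_sam.pt")
sam.to(device="cuda")
sam.eval()

predictor = SamPredictor(sam)
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground point prompt at pixel (x=500, y=375).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
```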
## Acknowledgement
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
<details>
<summary>
<a href="https://github.com/facebookresearch/segment-anything">SAM</a> (Segment Anything) [<b>bib</b>]
</summary>
```bibtex
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
journal={arXiv:2304.02643},
year={2023}
}
```
</details>
<details>
<summary>
<a href="https://github.com/microsoft/Cream/tree/main/TinyViT">TinyViT</a> (TinyViT: Fast Pretraining Distillation for Small Vision Transformers) [<b>bib</b>]
</summary>
```bibtex
@InProceedings{tiny_vit,
title={TinyViT: Fast Pretraining Distillation for Small Vision Transformers},
author={Wu, Kan and Zhang, Jinnian and Peng, Houwen and Liu, Mengchen and Xiao, Bin and Fu, Jianlong and Yuan, Lu},
booktitle={European conference on computer vision (ECCV)},
year={2022}
}
```
</details>
**BibTeX:**
```bibtex
@article{mobile_sam,
title={Faster Segment Anything: Towards Lightweight SAM for Mobile Applications},
author={Zhang, Chaoning and Han, Dongshen and Qiao, Yu and Kim, Jung Uk and Bae, Sung Ho and Lee, Seungkyu and Hong, Choong Seon},
journal={arXiv preprint arXiv:2306.14289},
year={2023}
}
``` |
mrbingzhao/macbert4csc-cn | mrbingzhao | 2023-06-29T09:25:19Z | 3 | 0 | transformers | [
"transformers",
"bert",
"fill-mask",
"pytorch",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-28T08:50:46Z | ---
language:
- zh
tags:
- bert
- pytorch
- zh
license: "apache-2.0"
---
# MacBERT for Chinese Spelling Correction (macbert4csc) Model
A Chinese spelling correction model.
`macbert4csc-base-chinese` evaluated on the SIGHAN2015 test data:
- Char Level: precision:0.9372, recall:0.8640, f1:0.8991
- Sentence Level: precision:0.8264, recall:0.7366, f1:0.7789
Since the training data includes the SIGHAN2015 training set (reproducing the paper), the model reaches SOTA level on the SIGHAN2015 test set.
Model architecture, adapted from SoftMasked-BERT:

## Usage
This model is open-sourced in the Chinese text error correction project [pycorrector](https://github.com/shibing624/pycorrector), which supports the macbert4csc model and can be called as follows:
```python
from pycorrector.macbert.macbert_corrector import MacBertCorrector
nlp = MacBertCorrector("shibing624/macbert4csc-base-chinese").macbert_correct
i = nlp('今天新情很好')
print(i)
```
Of course, you can also call it through the official huggingface/transformers API:
*Please use 'Bert' related functions to load this model!*
```python
import operator
import torch
from transformers import BertTokenizer, BertForMaskedLM
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizer.from_pretrained("shibing624/macbert4csc-base-chinese")
model = BertForMaskedLM.from_pretrained("shibing624/macbert4csc-base-chinese")
model.to(device)
texts = ["今天新情很好", "你找到你最喜欢的工作,我也很高心。"]
with torch.no_grad():
outputs = model(**tokenizer(texts, padding=True, return_tensors='pt').to(device))
def get_errors(corrected_text, origin_text):
sub_details = []
for i, ori_char in enumerate(origin_text):
if ori_char in [' ', '“', '”', '‘', '’', '琊', '\n', '…', '—', '擤']:
# add unk word
corrected_text = corrected_text[:i] + ori_char + corrected_text[i:]
continue
if i >= len(corrected_text):
continue
if ori_char != corrected_text[i]:
if ori_char.lower() == corrected_text[i]:
# pass english upper char
corrected_text = corrected_text[:i] + ori_char + corrected_text[i + 1:]
continue
sub_details.append((ori_char, corrected_text[i], i, i + 1))
sub_details = sorted(sub_details, key=operator.itemgetter(2))
return corrected_text, sub_details
result = []
for ids, text in zip(outputs.logits, texts):
_text = tokenizer.decode(torch.argmax(ids, dim=-1), skip_special_tokens=True).replace(' ', '')
corrected_text = _text[:len(text)]
corrected_text, details = get_errors(corrected_text, text)
print(text, ' => ', corrected_text, details)
result.append((corrected_text, details))
print(result)
```
output:
```shell
今天新情很好 => 今天心情很好 [('新', '心', 2, 3)]
你找到你最喜欢的工作,我也很高心。 => 你找到你最喜欢的工作,我也很高兴。 [('心', '兴', 15, 16)]
```
Model files:
```
macbert4csc-base-chinese
├── config.json
├── added_tokens.json
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.txt
```
### Training datasets
#### SIGHAN+Wang271K Chinese correction dataset
| Dataset | Corpus | Download link | Archive size |
| :------- | :--------- | :---------: | :---------: |
| **`SIGHAN+Wang271K Chinese correction dataset`** | SIGHAN+Wang271K (270k samples) | [Baidu Netdisk (password: 01b9)](https://pan.baidu.com/s/1BV5tr9eONZCI0wERFvr0gQ)| 106M |
| **`Original SIGHAN dataset`** | SIGHAN13 14 15 | [official csc.html](http://nlp.ee.ncu.edu.tw/resource/csc.html)| 339K |
| **`Original Wang271K dataset`** | Wang271K | [Automatic-Corpus-Generation, provided by dimmywang](https://github.com/wdimmy/Automatic-Corpus-Generation/blob/master/corpus/train.sgml)| 93M |
SIGHAN+Wang271K Chinese correction dataset, data format:
```json
[
{
"id": "B2-4029-3",
"original_text": "晚间会听到嗓音,白天的时候大家都不会太在意,但是在睡觉的时候这嗓音成为大家的恶梦。",
"wrong_ids": [
5,
31
],
"correct_text": "晚间会听到噪音,白天的时候大家都不会太在意,但是在睡觉的时候这噪音成为大家的恶梦。"
},
]
```
```shell
macbert4csc
├── config.json
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.txt
```
To train macbert4csc, see [https://github.com/shibing624/pycorrector/tree/master/pycorrector/macbert](https://github.com/shibing624/pycorrector/tree/master/pycorrector/macbert)
### About MacBERT
**MacBERT** is an improved BERT with novel **M**LM **a**s **c**orrection pre-training task, which mitigates the discrepancy of pre-training and fine-tuning.
Here is an example of our pre-training task.
| task | Example |
| -------------- | ----------------- |
| **Original Sentence** | we use a language model to predict the probability of the next word. |
| **MLM** | we use a language [M] to [M] ##di ##ct the pro [M] ##bility of the next word . |
| **Whole word masking** | we use a language [M] to [M] [M] [M] the [M] [M] [M] of the next word . |
| **N-gram masking** | we use a [M] [M] to [M] [M] [M] the [M] [M] [M] [M] [M] next word . |
| **MLM as correction** | we use a text system to ca ##lc ##ulate the po ##si ##bility of the next word . |
Except for the new pre-training task, we also incorporate the following techniques.
- Whole Word Masking (WWM)
- N-gram masking
- Sentence-Order Prediction (SOP)
**Note that our MacBERT can be directly replaced with the original BERT as there is no differences in the main neural architecture.**
For more technical details, please check our paper: [Revisiting Pre-trained Models for Chinese Natural Language Processing](https://arxiv.org/abs/2004.13922)
## Citation
```latex
@software{pycorrector,
author = {Xu Ming},
title = {pycorrector: Text Error Correction Tool},
year = {2021},
url = {https://github.com/shibing624/pycorrector},
}
```
|
msladic/ppo-MSLunarLander-v2 | msladic | 2023-06-29T09:22:20Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-28T13:05:55Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.76 +/- 20.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; adjust it to the .zip actually stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="msladic/ppo-MSLunarLander-v2", filename="ppo-MSLunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
YakovElm/Qt_20_BERT_Over_Sampling | YakovElm | 2023-06-29T09:08:25Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T09:07:50Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_20_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_20_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0158
- Train Accuracy: 0.9940
- Validation Loss: 0.3047
- Validation Accuracy: 0.9359
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3430 | 0.8260 | 0.2257 | 0.9205 | 0 |
| 0.0359 | 0.9884 | 0.3111 | 0.9213 | 1 |
| 0.0158 | 0.9940 | 0.3047 | 0.9359 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nomad-ai/rl_course_vizdoom_health_gathering_supreme | nomad-ai | 2023-06-29T09:03:02Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T09:02:54Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.97 +/- 4.35
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r nomad-ai/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
YeungNLP/firefly-baichuan-7b | YeungNLP | 2023-06-29T08:59:36Z | 17 | 9 | transformers | [
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-26T10:01:48Z | QLoRA+百万数据对baichun-7b模型进行高效指令微调
更多详情请查看Github项目: [Firefly(流萤): 中文对话式大语言模型(全量微调+QLoRA)](https://github.com/yangjianxin1/Firefly)
单轮对话脚本:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = 'YeungNLP/firefly-baichuan-7b-qlora-sft-merge'
max_new_tokens = 500
top_p = 0.9
temperature = 0.35
repetition_penalty = 1.0
device = 'cuda'
input_pattern = '<s>{}</s>'
model = AutoModelForCausalLM.from_pretrained(
model_name,
trust_remote_code=True,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
device_map='auto'
)
model.eval()
model = model.to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
text = input('User:')
while True:
text = input_pattern.format(text)
input_ids = tokenizer(text, return_tensors="pt").input_ids
input_ids = input_ids.to(device)
outputs = model.generate(
input_ids=input_ids, max_new_tokens=max_new_tokens, do_sample=True,
top_p=top_p, temperature=temperature, repetition_penalty=repetition_penalty,
eos_token_id=tokenizer.eos_token_id
)
rets = tokenizer.batch_decode(outputs)
output = rets[0].strip().replace(text, "").replace('</s>', "")
print("Firefly:{}".format(output))
text = input('User:')
```
Multi-turn dialogue script:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = 'cuda'
model_name = 'YeungNLP/firefly-baichuan-7b1-qlora-sft-merge'
max_new_tokens = 500
top_p = 0.9
temperature = 0.35
repetition_penalty = 1.0
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_name,
trust_remote_code=True,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
device_map='auto'
)
model.eval()
model = model.to(device)
# keep the full dialogue history
history_token_ids = tokenizer('<s>', return_tensors="pt").input_ids
# maximum input length fed to the model
history_max_len = 1000
user_input = input('User:')
while True:
user_input = '{}</s>'.format(user_input)
user_input_ids = tokenizer(user_input, return_tensors="pt").input_ids
history_token_ids = torch.concat((history_token_ids, user_input_ids), dim=1)
model_input_ids = history_token_ids[:, -history_max_len:].to(device)
outputs = model.generate(
input_ids=model_input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p,
temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id
)
model_input_ids_len = model_input_ids.size(1)
response_ids = outputs[:, model_input_ids_len:]
history_token_ids = torch.concat((history_token_ids, response_ids.cpu()), dim=1)
response = tokenizer.batch_decode(response_ids)
print("Firefly:" + response[0].strip().replace('</s>', ""))
user_input = input('User:')
```
|
Shrawani/squad-bloom-1b7-v1 | Shrawani | 2023-06-29T08:51:39Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-29T08:51:37Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
hseokool/vicuna-7b-1.1-230623-01 | hseokool | 2023-06-29T08:46:17Z | 7 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-29T08:46:14Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
kph-keewalpass/23 | kph-keewalpass | 2023-06-29T08:28:32Z | 0 | 0 | open_clip | [
"open_clip",
"art",
"text-to-image",
"en",
"hi",
"dataset:tiiuae/falcon-refinedweb",
"license:bigscience-openrail-m",
"region:us"
] | text-to-image | 2023-06-29T08:14:56Z | ---
license: bigscience-openrail-m
datasets:
- tiiuae/falcon-refinedweb
language:
- en
- hi
library_name: open_clip
pipeline_tag: text-to-image
tags:
- art
--- |
zhyemmmm/Babes | zhyemmmm | 2023-06-29T08:27:42Z | 29 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-29T08:22:11Z | ---
license: creativeml-openrail-m
---
|
p120/paul | p120 | 2023-06-29T08:22:40Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-29T08:19:03Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### paul Dreambooth model trained by p120 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
JacobHenry/Pleasantnoise | JacobHenry | 2023-06-29T08:07:55Z | 0 | 0 | null | [
"Langchain",
"OpenAI API",
"code",
"csv",
"conversation starter",
"document-question-answering",
"en",
"license:unknown",
"region:us"
] | document-question-answering | 2023-06-28T08:44:17Z | ---
license: unknown
language:
- en
pipeline_tag: document-question-answering
tags:
- Langchain
- OpenAI API
- code
- csv
- conversation starter
--- |
zhyemmmm/Cartoonish | zhyemmmm | 2023-06-29T08:04:53Z | 29 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-29T07:59:33Z | ---
license: creativeml-openrail-m
---
|
r45289/finetuned-bert-chinese-base | r45289 | 2023-06-29T07:54:13Z | 109 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:peoples_daily_ner",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-29T03:04:31Z | ---
tags:
- generated_from_trainer
datasets:
- peoples_daily_ner
metrics:
- f1
model-index:
- name: finetuned-bert-chinese-base
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: peoples_daily_ner
type: peoples_daily_ner
config: peoples_daily_ner
split: validation
args: peoples_daily_ner
metrics:
- name: F1
type: f1
value: 0.957080981756136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-chinese-base
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the peoples_daily_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0185
- F1: 0.9571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0494 | 1.0 | 1739 | 0.0250 | 0.9283 |
| 0.0146 | 2.0 | 3478 | 0.0202 | 0.9505 |
| 0.0051 | 3.0 | 5217 | 0.0185 | 0.9571 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
zhyemmmm/FuwaFuwaMix | zhyemmmm | 2023-06-29T07:50:57Z | 29 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-29T07:45:19Z | ---
license: creativeml-openrail-m
---
|
bash99/Ziya-LLaMA-13B-v1-GPTQ | bash99 | 2023-06-29T07:48:37Z | 6 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-27T04:09:36Z | Convert use Auto-GPTQ from WHJ1998/Ziya-LLaMA-13B-v1 |
jyarac/bert-base-multilingual-uncased-sentiment-MeIA | jyarac | 2023-06-29T07:33:28Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T04:43:23Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-uncased-sentiment-MeIA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-sentiment-MeIA
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0751
- eval_f1: 0.5932
- eval_runtime: 74.8554
- eval_samples_per_second: 70.135
- eval_steps_per_second: 2.204
- epoch: 4.0
- step: 1532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
thehonestbob/mrasp2 | thehonestbob | 2023-06-29T07:16:21Z | 157 | 2 | transformers | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"custom_code",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-06-29T07:03:37Z | ## 一、项目介绍
此项目是参考github上优秀的机器翻译项目[mRASP2](https://github.com/PANXiao1994/mRASP2),将官方开源的fairseq预训练权重改写为transformers架构,使其能够更加方便使用。
## 二、使用方法
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_path = 'thehonestbob/mrasp2'
model = AutoModelForSeq2SeqLM.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path)
input_text = ["Welcome to download and use!"]
inputs = tokenizer(input_text, return_tensors="pt", padding=True, max_length=1024, truncation=True)
result = model.generate(**inputs)
result = tokenizer.batch_decode(result, skip_special_tokens=True)
result = [pre.strip() for pre in result]
# ['欢迎下载和使用!']
```
## 3. Notes
The model supports 32 languages; see [mRASP2](https://github.com/PANXiao1994/mRASP2) for more details. The tokenizer in this repository is only optimized for Chinese and English; to use other languages, please
adapt tokenization_bat.py yourself. Note that this is the official 6e6d-no-mono model; the two 12e12d models could not be converted for reasons I have not found yet. If anyone knows why, please share.
## 4. Other models
[thehonestbob/mrasp](https://huggingface.co/thehonestbob/mrasp) |
nolanaatama/rccrtmnsthprkrvcv2450pchrys | nolanaatama | 2023-06-29T07:05:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-29T07:02:14Z | ---
license: creativeml-openrail-m
---
|
bravesong/distilbert-base-uncased-finetuned-emotion | bravesong | 2023-06-29T07:00:07Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T06:26:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240252098521805
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2195
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8552 | 1.0 | 250 | 0.3235 | 0.904 | 0.9013 |
| 0.2534 | 2.0 | 500 | 0.2195 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Ducco/ppo-Huggy | Ducco | 2023-06-29T06:49:11Z | 21 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-29T06:49:01Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Ducco/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
YakovElm/Qt_15_BERT_Over_Sampling | YakovElm | 2023-06-29T06:29:15Z | 63 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T06:28:39Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_15_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_15_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0356
- Train Accuracy: 0.9882
- Validation Loss: 0.2948
- Validation Accuracy: 0.9392
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4936 | 0.7488 | 0.5032 | 0.7762 | 0 |
| 0.1037 | 0.9668 | 0.3057 | 0.9262 | 1 |
| 0.0356 | 0.9882 | 0.2948 | 0.9392 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
johacbeg/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe | johacbeg | 2023-06-29T06:26:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T05:57:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1261
- F1: 0.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0807 | 1.0 | 2450 | 1.0517 | 0.5104 |
| 0.9141 | 2.0 | 4900 | 1.0769 | 0.5337 |
| 0.7355 | 3.0 | 7350 | 1.1261 | 0.5484 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
artificialguybr/Liberte | artificialguybr | 2023-06-29T06:26:36Z | 0 | 3 | null | [
"text-to-image",
"stable-diffusion",
"license:bigscience-openrail-m",
"region:us"
] | text-to-image | 2023-06-19T22:52:54Z | ---
license: bigscience-openrail-m
tags:
- text-to-image
- stable-diffusion
---
**Liberte.Redmond is here!**
You can currently test it at this link thanks to the makeai.run API.
https://huggingface.co/spaces/artificialguybr/liberte/
I'm grateful for the GPU time from **Redmond.AI** that allowed me to finish this model!
**This is a generalist model fine-tuned on SD 1.5.**
The model has a high capacity to generate realistic, artistic images, cars, people, and a wide variety of themes. It's a versatile model.
This model will serve as the basis for a dozen models and LoRAs to come, each specialized in a specific theme.
I recommend testing some prompts with or without negative prompts as there are cases where the results are also interesting without negatives.
I highly recommend the DPM++ SDE/2M or 2M SDE samplers with 30 steps.
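As an illustration of those settings, here is a minimal sketch using diffusers; it assumes you have downloaded the checkpoint as a single `.safetensors` file (the file name below is hypothetical) and maps the 2M recommendation to `DPMSolverMultistepScheduler`.
```python
# Minimal sketch, not official usage: load the checkpoint with diffusers and
# apply the suggested sampler/step settings. The local file name is hypothetical.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "liberte_redmond.safetensors",  # hypothetical path to the downloaded checkpoint
    torch_dtype=torch.float16,
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)  # DPM++ 2M
pipe = pipe.to("cuda")

image = pipe(
    "photo of a vintage car on a coastal road, golden hour",
    num_inference_steps=30,  # ~30 steps as recommended
).images[0]
image.save("liberte_sample.png")
```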
I really hope you like the model and use it.
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Patreon:https://www.patreon.com/user?u=81570187
Ko-fi:https://ko-fi.com/jvkape
I want to give a huge thanks to the people who helped me these past three months:
Mousewrites, PeePa, Kaz, Queria Star Morta, theovercomer8, Nawnie, Freon, Kohya.
Follow me on Twitter to get early access to future models:
https://twitter.com/artificialguybr |
Shrawani/squad-bloom-3b | Shrawani | 2023-06-29T06:26:02Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-29T06:07:05Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
rhovhannisyan/dmr-invoice-extractor | rhovhannisyan | 2023-06-29T06:21:48Z | 141 | 7 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"donut",
"image-to-text",
"vision",
"invoices",
"arxiv:2111.15664",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | image-to-text | 2023-06-28T11:46:01Z | ---
license: cc-by-nc-sa-4.0
tags:
- donut
- image-to-text
- vision
- invoices
---
# Donut finetuned on invoices
Based on the Donut base model (introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut)).
The model was trained on a few thousand annotated invoices and non-invoices (for the latter the doctype will be 'Other'). They span different countries and languages and are always single-page documents. The dataset is unfortunately proprietary. The model expects an input resolution of 1280x1920 pixels, so scanning samples at more than 150 dpi adds no value.
It was trained for about 4 hours on an NVIDIA RTX A4000 for 20k steps, reaching a val_metric of 0.03413819904382196 at the end.
The following indexes were included in the train set:
DocType
Currency
DocumentDate
GrossAmount
InvoiceNumber
NetAmount
TaxAmount
OrderNumber
CreditorCountry
## Model description
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.

### How to use
Look at the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples.
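For a quick start, the sketch below follows the generic Donut inference pattern from the transformers documentation; the sample file name and the task-prompt token are assumptions, so check this repository's tokenizer for the actual prompt the checkpoint expects.
```python
# Sketch based on the generic Donut inference pattern from the transformers docs.
# The task prompt and the sample file name below are assumptions.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("rhovhannisyan/dmr-invoice-extractor")
model = VisionEncoderDecoderModel.from_pretrained("rhovhannisyan/dmr-invoice-extractor")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("invoice.png").convert("RGB")  # hypothetical 150 dpi scan
pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)

task_prompt = "<s>"  # assumption: replace with the start token used during fine-tuning
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids.to(device)

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
)

sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
# Keys such as DocType, InvoiceNumber, GrossAmount, ... should appear in the parsed output.
print(processor.token2json(sequence))
```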
|
marip/bert-base-finetuned-ynat | marip | 2023-06-29T06:17:59Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T05:48:53Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: bert-base-finetuned-ynat
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
config: ynat
split: validation
args: ynat
metrics:
- name: F1
type: f1
value: 0.8700870690771503
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3653
- F1: 0.8701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.4209 | 0.8587 |
| No log | 2.0 | 358 | 0.3721 | 0.8677 |
| 0.3779 | 3.0 | 537 | 0.3607 | 0.8686 |
| 0.3779 | 4.0 | 716 | 0.3659 | 0.8688 |
| 0.3779 | 5.0 | 895 | 0.3653 | 0.8701 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jhleee/my_awesome_model | jhleee | 2023-06-29T06:14:10Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T05:26:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0659
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 89 | 0.1484 | 1.0 |
| No log | 2.0 | 178 | 0.0659 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
johacbeg/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos | johacbeg | 2023-06-29T06:13:07Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-28T15:50:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0243
- F1: 0.5441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8871 | 1.0 | 766 | 1.0243 | 0.5441 |
| 0.9119 | 2.0 | 1532 | 1.0243 | 0.5441 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hoaio/Reinforce-PixelcopterEnv-v1 | hoaio | 2023-06-29T05:34:53Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T05:34:49Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelcopterEnv-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 50.30 +/- 41.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
saisamarth/bloom-7b1-codev1 | saisamarth | 2023-06-29T05:17:51Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-29T05:16:58Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
PrarthanaJ/text_2_image_converision | PrarthanaJ | 2023-06-29T05:10:56Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-06-29T05:10:56Z | ---
license: bigscience-bloom-rail-1.0
---
|
coreml-community/coreml-epicrealism-pureEvolution-V3_cn | coreml-community | 2023-06-29T04:55:36Z | 0 | 8 | null | [
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-28T05:20:30Z | ---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).
- This model can be used with ControlNet.
<br>
# epiCRealism-pureEvolution-V3_cn:
Source(s): [CivitAI](https://civitai.com/models/25694/epicrealism)<br>
## V3 is here!
Since SDXL is right around the corner, let's say this is the final version for now; I put a lot of effort into it and probably cannot do much more.
I tried to refine the understanding of the Prompts, Hands and of course the Realism.
Let's see what you guys can do with it.
Thanks to @drawaline for the in-depth review; I'd like to give some advice on using this model.
## Advice
Use simple prompts
No need to use keywords like "masterpiece, best quality, 8k, intricate, high detail" or "(extremely detailed face), (extremely detailed hands), (extremely detailed hair)" since they don't produce an appreciable change
Use simple negatives or small negative embeddings; this gives the most realistic look (check the samples to get an idea of the negatives I used)
Add "asian, chinese" to the negative prompt if you're looking for ethnicities other than Asian
Light, shadows, and details are excellent without extra keywords
If you're looking for a natural effect, avoid "cinematic"
Avoid using "1girl" since it pushes things toward a render/anime style
Too much description of the face will mostly turn out badly
For a more fantasy-like output use the 2M Karras sampler
No extra noise offset is needed, but you can add one if you like 😉
## How to use?
Prompt: simple explanation of the image (try first without extra keywords)
Negative: "cartoon, painting, illustration, (worst quality, low quality, normal quality:2)"
Steps: >20 (if the image has errors or artefacts, use more steps)
CFG Scale: 5 (a higher CFG scale can lose realism, depending on the prompt, sampler and steps)
Sampler: any sampler (SDE/DPM samplers will result in more realism)
Size: 512x768 or 768x512
Hires upscaler: 4x_NMKD-Superscale-SP_178000_G (Denoising: 0.35, Upscale: 2x)<br><br>
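If you prefer to reproduce these settings outside of a Core ML app (for example with the original checkpoint in diffusers), a rough sketch could look like the following; the local file name is hypothetical and the `sde-dpmsolver++` algorithm type requires a recent diffusers version.
```python
# Sketch only: reproducing the suggested settings with diffusers and the original
# (non-Core ML) checkpoint; the checkpoint path below is hypothetical.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "epicrealism_pureEvolutionV3.safetensors",  # hypothetical local file
    torch_dtype=torch.float16,
).to("cuda")
# "2M SDE"-style sampler; availability of this algorithm_type depends on your diffusers version.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++"
)

image = pipe(
    prompt="portrait photo of an elderly fisherman, natural light",
    negative_prompt="cartoon, painting, illustration, (worst quality, low quality, normal quality:2)",
    num_inference_steps=25,  # >20 as suggested
    guidance_scale=5.0,      # CFG 5
    width=512,
    height=768,
).images[0]
image.save("epicrealism_sample.png")
```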
%20BREAK_half%20body%20portrait%20of%20a%20young%2020yo%20woman,%20black%20hair,%20wearing%20a%20summer%20dress%20BREAK_det.jpeg)
,%20(smile_0.5),%20wearing%20relaxed%20shirt%20and%20trousers,%20causal%20cloth.jpeg)
.jpeg)
 |
sleepynlp/dqn-SpaceInvadersNoFrameskip-v4-leo | sleepynlp | 2023-06-29T04:39:10Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T04:38:32Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 604.50 +/- 141.16
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sleepynlp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sleepynlp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sleepynlp
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Nikit2580/Doremon | Nikit2580 | 2023-06-29T04:32:26Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-06-29T04:32:26Z | ---
license: bigcode-openrail-m
---
|
coreml-community/coreml-ghostmix-v20-bakedVAE_cn | coreml-community | 2023-06-29T04:13:23Z | 0 | 4 | null | [
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-28T23:03:30Z | ---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).
- This model can be used with ControlNet.
<br>
# ghostmix-v20-bakedVAE_cn:
Source(s): [CivitAI](https://civitai.com/models/36520/ghostmix)<br>
If you like GhostMix , please give it a 5-star reviews, thank you.
You can run GhostMix on the cloud at Mage & Tusi.art & SinkIn.ai:
Mage provides unlimited generations for GhostMix with amazing features.
GhostMix V2 at: https://www.mage.space/model/ghostmix-v2
On tusi.art you can run my model for free; model link: https://tusi.art/models/601380436024757697
SinkIn.ai GhostMix model at: https://sinkin.ai/m/DY5rYnx
## IMPORTANT MATTERS (重要事项)
I think compatibility is the most important thing about a checkpoint; that's why I don't merge ANY LoRA into GhostMix. The checkpoint solves the CAN DO problem and LoRA solves the DO IT RIGHT problem. (我认为checkpoint最终要的是兼容性,所以我没有融任何lora进checkpoint,checkpoint应该解决的是做的到的问题,而lora解决的是做的对的问题)
Highres-Fix is A Must! Highres-Fix: 2x, denoising:0.4-0.5 or 1.5x, denoising:0.5-0.65. (一定要做高清修复! 高清修复: 2倍, 重绘幅度:0.4-0.5 或 1.5倍, 重绘幅度:0.5-0.65)
Make sure you are using the right CLIP setting if you want to replicate my work; some themes are CLIP=1, while others are CLIP=2. I suggest downloading the image and putting it into PNG Info to check the settings (如果想要复现,确保CLIP值要对! CLIP1和CLIP2要对!建议把图下下来然后放到PNG信息里面去查设置)
Most prompts from the previous version of GhostMix can produce similar results in the new version of GhostMix (之前用的大多数Prompts在新版本也可以生成相似的结果)
Textual Inversion & VAE: ng_deepnegative_v1_75t and easynegative, don't use Bad-Hand V4 & V5! (用 ng_deepnegative_v1_75t和easynegative,别用BadHandV4,V5)
Sampler Suggest: DPM++ series, Steps: 20-30, CFG:5-7(7 is best)(采样方法建议 DPM++系列, 步数20-30, CFG:5-7(7最好) )
Suggested resolution: 512,768! The mechanical girl theme is very sensitive to the resolution, so I don't suggest setting the aspect ratio too low. (建议分辨率:512,768! 机械少女主题对分辨率设置非常敏感,不建议设太低的宽高比)
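As a companion to the advice above, here is a rough diffusers sketch showing how the suggested negative embeddings and CFG value could be wired up outside of a Core ML app; the local file names are hypothetical and assume you have downloaded the checkpoint and embeddings yourself.
```python
# Sketch (assumes a single-file checkpoint and locally downloaded embedding files;
# the file names below are hypothetical).
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "ghostmix_v20BakedVAE.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)  # DPM++ family

# Load the suggested negative embeddings so their trigger words work in the negative prompt.
pipe.load_textual_inversion("easynegative.safetensors", token="easynegative")
pipe.load_textual_inversion("ng_deepnegative_v1_75t.pt", token="ng_deepnegative_v1_75t")

image = pipe(
    prompt="1 mechanical girl, ultra detailed, city background",
    negative_prompt="easynegative, ng_deepnegative_v1_75t",
    num_inference_steps=25,
    guidance_scale=7.0,  # CFG 7 is suggested as best
    width=512,
    height=768,
).images[0]
image.save("ghostmix_sample.png")
```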
If you want to support me, please buy me a coffee : https://ko-fi.com/ghostshell
如果想支持我,可以买杯咖啡给我:https://ko-fi.com/ghostshell
Alipay and WeChat users in mainland China can buy me a coffee through Aifadian (爱发电): https://afdian.net/a/ghostmix
## 2023.5.21 GhostMix-V2.0 (fp16 pruned VAE replaced)
## UPDATE DETAIL (中文更新说明在下面)
Hello everyone, this is Ghost_Shell, the creator. GhostMix-V2.0 significantly improves the realism of faces and also greatly increases the good image rate. In my tests at 512,768 resolution, the good image rate of the Prompts I used before was above 50%. It is more user-friendly. While making GhostMix-V2.0, I adjusted 47 versions of the model and finally chose one of them.
大家好,这里是作者Ghost_Shell。这次GhostMix-V2.0大幅提升了脸的真实性,也大幅提升了良图率,在我测的512,768分辨率之下,之前用的Prompts良图率都在50%以上,对用户更加友好。这次在测试中一共调了47个版本的模型,最终选了一个。
## Other Words I want to say (题外话):
To be honest, this may be the last version of GhostMix. On one hand, it is really inefficient to use a 3060ti to make models; in the past two weeks, I have had almost no free time except for making models and testing them. On the other hand, I feel for now that this model is almost at its limit and the room for improvement is really not high. I hope you like it. If you like the model, I hope you can post your images to Civitai. Many of the prompts I tested for GhostMix-V2.0 are from your posts, which is really important for me to test the model. If you can give it a 5-star rating, that would be great. If you are willing to support my work, please click: https://ko-fi.com/ghostshell. My goal is to buy a 4070 and work more efficiently on making models. Using a 3060ti to make models is really inefficient, and it basically cannot be used to test high-resolution images.
说实话,这可能是GhostMix的最后一个模型,一方面3060ti去做模型真的效率太低了…最近两个星期基本没有空闲时间,除了做模型,就是测模型。另外一方面,我暂时觉得这个模型近乎极限,能提升的空间确实不高了,希望大家喜欢。如果大家喜欢模型,希望大家能post自己的作品到Civitai,这次测试的很多Prompts就是从你们post里面来的,这对我测试模型真的很关键,如果能5星评价就更好。如果愿意支持我的工作,请点击:https://ko-fi.com/ghostshell。 我的目标就希望能换一块4070,更高效的去做模型,3060ti真的测模型效率太低了,而且基本没法测更高分辨率的图片. <br><br>
.jpeg)

,(colorful_1.3),(masterpiece_1.2),%20best%20quality,%20masterpiece,%20original,%20extremely%20detailed%20wallpaper,%20looking%20at.jpeg)
,extremely%20high%20detailed,%20intricate,%208k,%20HDR,%20wallpaper,%20cinematic%20lighting,%20%20,large%20sword,%20%20glow.jpeg) |
Ibrahim-Alam/finetuning-bert-base-uncased-on-tweet_sentiment_binary | Ibrahim-Alam | 2023-06-29T04:13:12Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T04:06:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-bert-base-uncased-on-tweet_sentiment_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-bert-base-uncased-on-tweet_sentiment_binary
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2384
- Accuracy: 0.9326
- F1: 0.9355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
QuangHuy54/roberta-small-base-squad | QuangHuy54 | 2023-06-29T04:12:13Z | 130 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-29T02:21:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-small-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-small-base-squad
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
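For reference, the listed hyperparameters map roughly onto the following `TrainingArguments` (a sketch only; dataset preprocessing and the `Trainer` call are omitted, and the output directory name is illustrative):
```python
# Sketch: the hyperparameters above expressed as TrainingArguments
# (Adam betas/epsilon are the transformers defaults).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="roberta-small-base-squad",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=2,
    evaluation_strategy="epoch",
)
print(args.lr_scheduler_type, args.warmup_ratio)
```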
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1483 | 1.0 | 4928 | 1.0453 |
| 0.9092 | 2.0 | 9856 | 0.9868 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DataHammer/scidpr-ctx-encoder | DataHammer | 2023-06-29T03:59:55Z | 94 | 0 | transformers | [
"transformers",
"pytorch",
"dpr",
"sentence-similarity",
"en",
"dataset:allenai/qasper",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-06-29T02:58:43Z | ---
license: apache-2.0
datasets:
- allenai/qasper
language:
- en
library_name: transformers
pipeline_tag: sentence-similarity
---
# SciDPR Context Encoder
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. scidpr-ctx-encoder is the Context Encoder trained using the Scientific Question Answer (QA) dataset (Pradeep et al., 2021).
- **Developed by:** See [GitHub repo](https://github.com/gmftbyGMFTBY/science-llm) for model developers
- **Model type:** BERT-based encoder
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://github.com/gmftbyGMFTBY/science-llm/blob/main/LICENSE)
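A minimal usage sketch, assuming the checkpoint follows the standard transformers DPR layout (tokenizer files included in the repo):
```python
# Minimal sketch: embed passages with the context encoder.
import torch
from transformers import AutoTokenizer, DPRContextEncoder

tokenizer = AutoTokenizer.from_pretrained("DataHammer/scidpr-ctx-encoder")
encoder = DPRContextEncoder.from_pretrained("DataHammer/scidpr-ctx-encoder")

passages = [
    "We evaluate our model on a benchmark of questions over scientific papers.",
    "The encoder is trained with a dual-encoder retrieval objective.",
]
inputs = tokenizer(passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = encoder(**inputs).pooler_output  # one dense vector per passage
print(embeddings.shape)
```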
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [Github Repo](https://github.com/gmftbyGMFTBY/science-llm)
- **Paper [optional]:** [Paper Repo]() |
DataHammer/scidpr-question-encoder | DataHammer | 2023-06-29T03:59:35Z | 94 | 0 | transformers | [
"transformers",
"pytorch",
"dpr",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:allenai/qasper",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-06-29T02:58:25Z | ---
datasets:
- allenai/qasper
language:
- en
library_name: transformers
pipeline_tag: sentence-similarity
license: apache-2.0
---
# SciDPR Question Encoder
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. scidpr-question-encoder is the Question Encoder trained using the Scientific Question Answer (QA) dataset (Pradeep et al., 2021).
- **Developed by:** See [GitHub repo](https://github.com/gmftbyGMFTBY/science-llm) for model developers
- **Model type:** BERT-based encoder
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://github.com/gmftbyGMFTBY/science-llm/blob/main/LICENSE)
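A minimal usage sketch, assuming the checkpoint follows the standard transformers DPR layout; in practice the passage embeddings come from the companion context encoder (DataHammer/scidpr-ctx-encoder), and a random placeholder is used here for illustration:
```python
# Minimal sketch: embed a question and score it against passage embeddings.
import torch
from transformers import AutoTokenizer, DPRQuestionEncoder

tokenizer = AutoTokenizer.from_pretrained("DataHammer/scidpr-question-encoder")
encoder = DPRQuestionEncoder.from_pretrained("DataHammer/scidpr-question-encoder")

inputs = tokenizer("What dataset is the model evaluated on?", return_tensors="pt")
with torch.no_grad():
    q_emb = encoder(**inputs).pooler_output          # (1, hidden_size)

# Placeholder: (num_passages, hidden_size) tensor normally produced by the context encoder.
passage_embs = torch.randn(4, q_emb.shape[-1])
scores = q_emb @ passage_embs.T                      # DPR uses dot-product similarity
print(scores.topk(k=2).indices)
```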
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [Github Repo](https://github.com/gmftbyGMFTBY/science-llm)
- **Paper [optional]:** [Paper Repo]()
|
DataHammer/mozi_llama_7b | DataHammer | 2023-06-29T03:58:58Z | 0 | 2 | transformers | [
"transformers",
"question-answering",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:allenai/qasper",
"dataset:DataHammer/paper_ground_dialog",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-28T04:13:01Z | ---
language:
- en
library_name: transformers
pipeline_tag: question-answering
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
- allenai/qasper
- DataHammer/paper_ground_dialog
metrics:
- bleu
- rouge
- f1
- bertscore
---
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Mozi is the first large-scale language model for the scientific paper domain, such as question answering and emotional support. With the help of the large-scale language and evidence retrieval models, SciDPR, Mozi generates concise and accurate responses to users' questions about specific papers and provides emotional support for academic researchers.
- **Developed by:** See [GitHub repo](https://github.com/gmftbyGMFTBY/science-llm) for model developers
- **Model date:** Mozi was trained in May 2023.
- **Model version:** This is version 1 of the model.
- **Model type:** Mozi is an auto-regressive language model based on the transformer architecture. It is currently released in one size: 7B parameters.
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://github.com/gmftbyGMFTBY/science-llm/blob/main/LICENSE)
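A minimal generation sketch, assuming the released weights load as a standard LLaMA-style causal LM in transformers (`accelerate` is needed for `device_map="auto"`); the prompt template is an assumption, since the exact instruction format is not documented here:
```python
# Sketch only: prompt format and repo layout are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DataHammer/mozi_llama_7b")
model = AutoModelForCausalLM.from_pretrained(
    "DataHammer/mozi_llama_7b", torch_dtype=torch.float16, device_map="auto"
)

prompt = "Question: What problem does the paper address?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```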
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [Github Repo](https://github.com/gmftbyGMFTBY/science-llm)
- **Paper [optional]:** [Paper Repo]() |
YakovElm/Qt_10_BERT_Over_Sampling | YakovElm | 2023-06-29T03:49:33Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T03:47:58Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_10_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_10_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0404
- Train Accuracy: 0.9843
- Validation Loss: 0.4106
- Validation Accuracy: 0.9205
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
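For reference, the optimizer dictionary above corresponds roughly to the following Keras setup (a sketch only; model construction and compilation are omitted):
```python
# Sketch: the optimizer settings above expressed as a Keras optimizer.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    clipnorm=1.0,  # clip each gradient to norm 1.0, as listed above
)
# model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```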
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4349 | 0.7829 | 0.3376 | 0.8686 | 0 |
| 0.1023 | 0.9645 | 0.3564 | 0.9238 | 1 |
| 0.0404 | 0.9843 | 0.4106 | 0.9205 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Panchovix/Platypus-30B-SuperHOT-8K-4bit-32g | Panchovix | 2023-06-29T03:41:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-29T02:29:48Z | ---
license: other
---
[Platypus-30B](https://huggingface.co/lilloukas/Platypus-30B) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantized at 4 bit.
It was created with GPTQ-for-LLaMA using group size 32 and act-order (true) as parameters, to keep perplexity as close as possible to the FP16 model.
I HIGHLY suggest using exllama to avoid some VRAM issues.
Use (max_seq_len = context):
If max_seq_len = 4096, compress_pos_emb = 2
If max_seq_len = 8192, compress_pos_emb = 4
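As a rough illustration, these settings could be applied through exllama's Python API as sketched below; module, attribute and method names follow the exllama example scripts at the time of writing and may differ between versions, and the file names are assumptions. The gpu_split suggestion for 2x24 GB cards mentioned below is included as well.
```python
# Rough sketch based on the exllama example scripts; treat names as illustrative only.
import os
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

model_dir = "Platypus-30B-SuperHOT-8K-4bit-32g"                    # local download of this repo
config = ExLlamaConfig(os.path.join(model_dir, "config.json"))
config.model_path = os.path.join(model_dir, "model.safetensors")   # hypothetical file name

config.max_seq_len = 8192        # extended context
config.compress_pos_emb = 4.0    # 4x compression at 8192 context (use 2.0 at 4096)
config.set_auto_map("9,21")      # split across 2x24 GB GPUs as suggested below

model = ExLlama(config)
tokenizer = ExLlamaTokenizer(os.path.join(model_dir, "tokenizer.model"))
cache = ExLlamaCache(model)
generator = ExLlamaGenerator(model, tokenizer, cache)
print(generator.generate_simple("The key idea of the paper is", max_new_tokens=64))
```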
If you have 2x24 GB VRAM GPUs, use the following to avoid out-of-memory errors at 8192 context:
gpu_split: 9,21 |
Christiansg/finetuning-sentiment_spanish-amazon-group23 | Christiansg | 2023-06-29T03:25:10Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T03:11:54Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment_spanish-amazon-group23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment_spanish-amazon-group23
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
beomi/kollama-33b | beomi | 2023-06-29T03:12:02Z | 11 | 8 | transformers | [
"transformers",
"llama",
"text-generation",
"KoLLAMA",
"KoreanGPT",
"ko",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-29T03:10:40Z | ---
license: mit
language:
- ko
- en
metrics:
- perplexity
- accuracy
pipeline_tag: text-generation
tags:
- llama
- KoLLAMA
- KoreanGPT
---
> 🚧 Note: this repo is under construction 🚧
## Todo
✅ - finish
⏳ - currently working on it
- ✅ Train new BBPE Tokenizer
- ✅ Test train code on TPUv4 Pods (with model parallel)
- ✅ Converting test (jax to PyTorch)
- ✅ LM train validation on minimal dataset (1 sentence 1000 step)
- ⏳ Build Data Shuffler (curriculum learning)
- ⏳ Train 7B Model
- ⏳ Train 13B Model
- ⏳ Train 33B Model
- Train 65B Model
# KoLLaMA Model Card
KoLLaMA (33B) is trained on a Korean/English/Code dataset with the LLaMA architecture via JAX,
with warm support from the [Google TPU Research Cloud program](https://sites.research.google/trc/about/), which provided part of the computation resources.
## Model details
**Researcher developing the model**
Junbum Lee (aka Beomi)
**Model date**
KoLLaMA was trained between 2023.04~
- 33B model was trained on 2023.07~
**Model version**
This is alpha version of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
(This repo contains 33B model!)
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
KoLLAMA: [TBD]
LLAMA: https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
MIT
**Where to send questions or comments about the model**
Questions and comments about KoLLaMA can be sent via the [GitHub repository](https://github.com/beomi/KoLLAMA) of the project , by opening an issue.
## Intended use
**Primary intended uses**
The primary use of KoLLaMA is research on Korean Opensource large language models
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
## Evaluation datasets
[TBD]
## Training dataset
[TBD]
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content. |
kevinid/bert-base-multilingual-uncased-finetuned-MeIA-AnalisisDeSentimientos | kevinid | 2023-06-29T03:07:08Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-28T02:42:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-multilingual-uncased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0342
- F1: 0.5746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0295 | 1.0 | 766 | 1.0167 | 0.5416 |
| 0.9326 | 2.0 | 1532 | 1.0108 | 0.5553 |
| 0.7689 | 3.0 | 2298 | 1.0342 | 0.5746 |
| 0.623 | 4.0 | 3064 | 1.1112 | 0.5679 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
t3PbMvBN6SXv/Pixelcopter-PLE-v0 | t3PbMvBN6SXv | 2023-06-29T03:02:50Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T02:54:01Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 86.50 +/- 60.51
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Kakaru/sovits-whisper-pretrain | Kakaru | 2023-06-29T02:31:01Z | 0 | 3 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-06-07T14:21:28Z | ---
license: cc-by-nc-sa-4.0
---
The diffusion folder contains the pretrained model trained to 420k steps under the whisper_ppg encoder.
The other two folders contain the so-vits pretrained models under the whisper_ppg encoder: 300k steps is the target step count and 264k steps is the loss low point; 'n_speakers' in the config file should be changed to 132.
Datasets used: vctk + m4singers + opencpop + kiritan |
hw2942/Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-V3 | hw2942 | 2023-06-29T02:15:02Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"longformer",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-29T01:43:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-V3
results: []
widget:
- text:A股创业板六年新高;纳指跌落高位,标普又新高,创史上第二大中概IPO和今年美股最大IPO的滴滴首日冲高回落,市值破800亿美元,叮咚买菜次日涨逾60%;美元逾两月新高,金银铜6月大跌,原油半年涨超50%。\n中国6月官方制造业PMI为50.9,价格指数从高位回落。\n央行等六部门:充分发挥信贷等金融子市场合力,增强政策的针对性和可操作性。\n人社部 “十四五” 发展规划要求,基本养老保险参保率达95%,城镇新增就业逾5000万人。\n沪深交所7月19日起下调基金交易经手费收费标准。\n奈雪的茶赴港上市首日破发,收盘大跌14%,市值跌破300亿港元。\n港股上市倒计时,小鹏汽车定价165港元/股。\n格力2020股东会通过员工持股计划等议案,董明珠称接班人不是我说你行就行,是你能行才行。\n美国6月小非农ADP新增就业高于预期,绝对值较5月有所回落。\n美联储逆回购用量史上首次逼近1万亿美元。\n媒体称拜登最早下周颁布新行政令,限制多个行业的寡头垄断。\n亚马逊称FTC新任主席有偏见,寻求其回避反垄断调查。\n散户最爱平台Robinhood遭FINRA创纪录罚款7000万美元,被指坑害百万客户。
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-V3
This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6427
- Accuracy: 0.6154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 38 | 0.6960 | 0.5 |
| No log | 2.0 | 76 | 0.7015 | 0.5 |
| No log | 3.0 | 114 | 0.8248 | 0.5 |
| No log | 4.0 | 152 | 0.6956 | 0.5 |
| No log | 5.0 | 190 | 0.6886 | 0.5 |
| No log | 6.0 | 228 | 0.7065 | 0.5 |
| No log | 7.0 | 266 | 0.7070 | 0.5 |
| No log | 8.0 | 304 | 0.7395 | 0.5385 |
| No log | 9.0 | 342 | 0.6871 | 0.6538 |
| No log | 10.0 | 380 | 0.6427 | 0.6154 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|