modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
silver18723/q-FrozenLake-v1-4x4-noSlippery | silver18723 | 2023-05-23T14:11:35Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T14:11:30Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is assumed to be the notebook helper that downloads and
# unpickles the model dictionary from the Hub (it is not a gym or transformers API).
model = load_from_hub(repo_id="silver18723/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
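A hedged follow-up sketch of greedy action selection (this assumes the pickled dictionary stores the Q-table under a `qtable` key, which the card does not confirm):
```python
import numpy as np

state = env.reset()  # on gymnasium / newer gym: state, info = env.reset()
action = int(np.argmax(model["qtable"][state]))  # pick the greedy action
```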
|
Benned/KoboKanaeru | Benned | 2023-05-23T14:11:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-23T14:08:49Z | ---
license: creativeml-openrail-m
---
|
hazerbean/finetuning-sentiment-model-3000-samples | hazerbean | 2023-05-23T14:01:36Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T13:11:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88
- name: F1
type: f1
value: 0.8831168831168831
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3044
- Accuracy: 0.88
- F1: 0.8831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
satyamverma/Pre-requisite_Model_2 | satyamverma | 2023-05-23T13:58:55Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T07:25:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Pre-requisite_Model_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pre-requisite_Model_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7157
- Accuracy: 0.5741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5839 | 1.0 | 648 | 0.6894 | 0.5702 |
| 0.5469 | 2.0 | 1296 | 0.7157 | 0.5741 |
| 0.5156 | 3.0 | 1944 | 0.7157 | 0.5741 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Siliconic/raven-diffusion-v1 | Siliconic | 2023-05-23T13:56:45Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-23T13:43:26Z | ---
inference: true
language:
- en
tags:
- text-to-image
---
# Raven Diffusion v1
A text-to-image generator for the Raven AI System. |
Jasperyyc/uroptest2 | Jasperyyc | 2023-05-23T13:55:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-19T02:49:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uroptest2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uroptest2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1950
- Precision: 0.4434
- Recall: 0.4290
- F1: 0.4361
- Accuracy: 0.9699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0425 | 1.0 | 690 | 0.1567 | 0.3439 | 0.4198 | 0.3780 | 0.9637 |
| 0.0367 | 2.0 | 1380 | 0.1876 | 0.4529 | 0.3565 | 0.3990 | 0.9694 |
| 0.0251 | 3.0 | 2070 | 0.1603 | 0.3693 | 0.4599 | 0.4096 | 0.9662 |
| 0.0213 | 4.0 | 2760 | 0.1659 | 0.3842 | 0.4120 | 0.3976 | 0.9675 |
| 0.0166 | 5.0 | 3450 | 0.1732 | 0.3975 | 0.4429 | 0.4190 | 0.9677 |
| 0.0104 | 6.0 | 4140 | 0.1686 | 0.3871 | 0.4182 | 0.4021 | 0.9683 |
| 0.0105 | 7.0 | 4830 | 0.1809 | 0.4205 | 0.3920 | 0.4058 | 0.9688 |
| 0.0064 | 8.0 | 5520 | 0.1914 | 0.4452 | 0.4074 | 0.4255 | 0.9702 |
| 0.0047 | 9.0 | 6210 | 0.1908 | 0.4310 | 0.4244 | 0.4277 | 0.9696 |
| 0.004 | 10.0 | 6900 | 0.1950 | 0.4434 | 0.4290 | 0.4361 | 0.9699 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
orepin/Reinforce-CartPole-v1 | orepin | 2023-05-23T13:45:59Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T13:45:48Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nolanaatama/nwjnshnnrvc500pchdj | nolanaatama | 2023-05-23T13:45:35Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-23T13:30:01Z | ---
license: creativeml-openrail-m
---
|
csukuangfj/sherpa-ncnn-streaming-zipformer-bilingual-zh-en-2023-02-13 | csukuangfj | 2023-05-23T13:32:57Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-02-13T11:03:53Z | ---
license: apache-2.0
---
# Streaming zipformer for sherpa-ncnn
The torchscript model is from
https://huggingface.co/pfluo/k2fsa-zipformer-chinese-english-mixed
The training code is from
https://github.com/k2-fsa/icefall/tree/master/egs/librispeech/ASR/pruned_transducer_stateless7_streaming
|
izumi-lab/llama-13b-japanese-lora-v0-1ep | izumi-lab | 2023-05-23T13:28:14Z | 0 | 11 | null | [
"llama",
"causal-lm",
"ja",
"dataset:izumi-lab/llm-japanese-dataset",
"arxiv:2305.12720",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2023-05-19T16:01:43Z | ---
license: cc-by-sa-4.0
datasets:
- izumi-lab/llm-japanese-dataset
language:
- ja
tags:
- llama
- causal-lm
---
This repo contains a low-rank adapter for LLaMA-13b
fit on the [llm-japanese-dataset](https://github.com/masanorihirano/llm-japanese-dataset) dataset.
You can test this at https://huggingface.co/spaces/izumi-lab/llama-13b-japanese-lora-v0-1ep
This version of the weights was trained with the following hyperparameters:
- Epochs: 1
- Batch size: 130
- Cutoff length: 256
- Learning rate: 3e-4
- Lora _r_: 4
- Lora target modules: q_proj, v_proj
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel
base_model = "decapoda-research/llama-13b-hf"
# Please note that the special license of decapoda-research/llama-13b-hf is applied.
model = LlamaForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = PeftModel.from_pretrained(
model,
"izumi-lab/llama-13b-japanese-lora-v0",
torch_dtype=torch.float16,
)
```
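A hypothetical generation follow-up (the prompt and decoding settings below are illustrative, not from this card):
```python
prompt = "日本の首都はどこですか?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```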
For the latest information, please visit [llm.msuzuki.me](https://llm.msuzuki.me).
## Details
- Japanese Paper: [https://jxiv.jst.go.jp/index.php/jxiv/preprint/view/383](https://jxiv.jst.go.jp/index.php/jxiv/preprint/view/383)
- English Paper: [https://arxiv.org/abs/2305.12720](https://arxiv.org/abs/2305.12720)
- GitHub: [https://github.com/masanorihirano/llm-japanese-dataset](https://github.com/masanorihirano/llm-japanese-dataset)
- Website: [llm.msuzuki.me](https://llm.msuzuki.me).
Citation:
```
@preprint{Hirano2023-llmj,
title={{llm-japanese-dataset v0: Construction of Japanese Chat Dataset for Large Language Models and its Methodology}},
author={Masanori HIRANO and Masahiro SUZUKI and Hiroki SAKAJI},
doi={10.48550/arXiv.2305.12720},
archivePrefix={arXiv},
arxivId={2305.12720},
year={2023}
}
```
If you have any inquiries, such as joint research, data provision, or other types of support, please email [email protected]. |
DuckCampus/ArP | DuckCampus | 2023-05-23T12:58:20Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-18T15:00:13Z | ---
license: creativeml-openrail-m
---
|
NerfLongshot/t5-small-finetuned-amazon-en | NerfLongshot | 2023-05-23T12:52:06Z | 62 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-01T08:03:45Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: NerfLongshot/t5-small-finetuned-amazon-en
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NerfLongshot/t5-small-finetuned-amazon-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8618
- Validation Loss: 2.4792
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 8364, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.1458 | 2.5306 | 0 |
| 2.8618 | 2.4792 | 1 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.12.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ProsusAI/finbert | ProsusAI | 2023-05-23T12:43:35Z | 1,031,040 | 761 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"en",
"arxiv:1908.10063",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: "en"
tags:
- financial-sentiment-analysis
- sentiment-analysis
widget:
- text: "Stocks rallied and the British pound gained."
---
FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financial corpus and thereby fine-tuning it for financial sentiment classification. [Financial PhraseBank](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts) by Malo et al. (2014) is used for fine-tuning. For more details, please see the paper [FinBERT: Financial Sentiment Analysis with Pre-trained Language Models](https://arxiv.org/abs/1908.10063) and our related [blog post](https://medium.com/prosus-ai-tech-blog/finbert-financial-sentiment-analysis-with-bert-b277a3607101) on Medium.
The model will give softmax outputs for three labels: positive, negative or neutral.
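A minimal inference sketch using the `transformers` pipeline API (the example sentence is the widget text above):
```python
from transformers import pipeline

# Returns the top label (positive / negative / neutral) with its softmax score.
classifier = pipeline("text-classification", model="ProsusAI/finbert")
print(classifier("Stocks rallied and the British pound gained."))
```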
---
About Prosus
Prosus is a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities. For more information, please visit www.prosus.com.
Contact information
Please contact Dogu Araci dogu.araci[at]prosus[dot]com and Zulkuf Genc zulkuf.genc[at]prosus[dot]com about any FinBERT related issues and questions.
|
kribby/cats-mobilenet3-imagenet-v2 | kribby | 2023-05-23T12:43:05Z | 4 | 0 | tf-keras | [
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] | image-classification | 2023-05-23T12:35:25Z | ---
pipeline_tag: image-classification
--- |
muhammadravi251001/fine-tuned-DatasetQAS-Squad-ID-with-xlm-roberta-large-with-ITTL-with-freeze-LR-1e-05 | muhammadravi251001 | 2023-05-23T12:35:26Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-07T13:04:11Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-Squad-ID-with-xlm-roberta-large-with-ITTL-with-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-Squad-ID-with-xlm-roberta-large-with-ITTL-with-freeze-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4039
- Exact Match: 53.6774
- F1: 69.6967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 1.5208 | 0.5 | 463 | 1.4095 | 50.0294 | 67.1298 |
| 1.3903 | 1.0 | 926 | 1.3159 | 52.1644 | 69.1681 |
| 1.2662 | 1.5 | 1389 | 1.2718 | 53.1058 | 69.4729 |
| 1.1754 | 2.0 | 1852 | 1.2603 | 53.2655 | 69.6756 |
| 1.0681 | 2.5 | 2315 | 1.2586 | 53.6186 | 69.8988 |
| 1.0887 | 3.0 | 2778 | 1.2555 | 53.6690 | 70.2968 |
| 0.9549 | 3.5 | 3241 | 1.3076 | 54.1481 | 70.1900 |
| 0.9549 | 4.0 | 3704 | 1.2922 | 54.0977 | 70.2654 |
| 0.8528 | 4.49 | 4167 | 1.3767 | 53.9212 | 70.6362 |
| 0.8467 | 4.99 | 4630 | 1.3384 | 53.8371 | 69.7755 |
| 0.7709 | 5.49 | 5093 | 1.3847 | 53.7615 | 70.0607 |
| 0.763 | 5.99 | 5556 | 1.4039 | 53.6774 | 69.6967 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
geonp/alpaca-ko-en-translation | geonp | 2023-05-23T12:32:40Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-23T12:27:22Z | LoRA based on beomi/KoAlpaca-Polyglot. |
rmcpantoja/ald_ForwardTacotron_TTS | rmcpantoja | 2023-05-23T12:31:52Z | 0 | 0 | speechbrain | [
"speechbrain",
"climate",
"text-to-speech",
"es",
"dataset:rmcpantoja/Ald_Mexican_Spanish_speech_dataset",
"license:unlicense",
"region:us"
] | text-to-speech | 2023-05-23T12:28:44Z | ---
license: unlicense
datasets:
- rmcpantoja/Ald_Mexican_Spanish_speech_dataset
language:
- es
library_name: speechbrain
pipeline_tag: text-to-speech
tags:
- climate
--- |
aliakyurek/ppo-PyramidsTraining | aliakyurek | 2023-05-23T12:30:50Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-05-23T12:29:31Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: aliakyurek/ppo-PyramidsTraining
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
nolanaatama/lyrl | nolanaatama | 2023-05-23T12:28:22Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-23T12:18:36Z | ---
license: creativeml-openrail-m
---
|
Lendalf/a2c-PandaReachDense-v2 | Lendalf | 2023-05-23T12:26:12Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T12:25:31Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.13 +/- 0.30
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository file list for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is assumed; verify it against the repository contents.
checkpoint = load_from_hub(repo_id="Lendalf/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
FredS1000/ReinforceCardPoleV1 | FredS1000 | 2023-05-23T12:25:56Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T12:04:35Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: ReinforceCardPoleV1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 428.38 +/- 111.31
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Sejan/bert-finetuned-mrpc | Sejan | 2023-05-23T12:25:07Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T12:20:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
ashutosh2109/bert-finetuned-squad | ashutosh2109 | 2023-05-23T12:23:12Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-22T17:59:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
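As a hedged illustration, the list above maps onto the `transformers` `TrainingArguments` roughly as follows (a sketch; the card does not state that the standard `Trainer` API was invoked this way):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is illustrative.
# The Adam betas/epsilon above match the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="bert-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```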
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
asenella/reproduce_mvae_mnist_0 | asenella | 2023-05-23T12:16:24Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-23T12:16:19Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
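For this repository, `hf_hub_path` would be `asenella/reproduce_mvae_mnist_0`.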
|
sadra-barikbin/CartPole-v1-Reinforce | sadra-barikbin | 2023-05-23T12:10:53Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T12:10:42Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1-Reinforce
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 476.50 +/- 67.88
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ConvLab/sumbt-dst-multiwoz21 | ConvLab | 2023-05-23T11:55:24Z | 35 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"classification",
"dialog state tracking",
"conversational system",
"task-oriented dialog",
"en",
"dataset:ConvLab/multiwoz21",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2023-05-23T11:26:02Z | ---
language:
- en
license: apache-2.0
tags:
- roberta
- classification
- dialog state tracking
- conversational system
- task-oriented dialog
datasets:
- ConvLab/multiwoz21
metrics:
- Joint Goal Accuracy
- Slot F1
model-index:
- name: setsumbt-dst-multiwoz21
results:
- task:
type: classification
name: dialog state tracking
dataset:
type: ConvLab/multiwoz21
name: MultiWOZ21
split: test
metrics:
- type: Joint Goal Accuracy
value: 50.3
name: JGA
- type: Slot F1
value: 90.8
name: Slot F1
---
# SUMBT-dst-multiwoz21
This model is a [SUMBT](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/dst/setsumbt) model based on [roberta-base](https://huggingface.co/roberta-base), fine-tuned on [MultiWOZ2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00001
- train_batch_size: 3
- eval_batch_size: 16
- seed: 0
- gradient_accumulation_steps: 1
- optimizer: AdamW
- lr_scheduler_type: linear
- num_epochs: 50.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0+cu110
- Datasets 2.3.2
- Tokenizers 0.12.1
|
aliakyurek/ppo-SnowballTarget | aliakyurek | 2023-05-23T11:49:03Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-05-23T11:48:57Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: aliakyurek/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
rakgesh/image-classifier-one-piece-v03 | rakgesh | 2023-05-23T11:41:43Z | 2 | 0 | tf-keras | [
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] | image-classification | 2023-05-23T11:30:31Z | ---
pipeline_tag: image-classification
--- |
Livin/flan-t5-base-samsum | Livin | 2023-05-23T11:32:42Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-23T09:12:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-base-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.1222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3701
- Rouge1: 47.1222
- Rouge2: 23.3908
- Rougel: 39.7231
- Rougelsum: 43.3842
- Gen Len: 17.1465
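A hedged inference sketch using the `transformers` summarization pipeline (the dialogue below is illustrative, not from the samsum test set):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Livin/flan-t5-base-samsum")
dialogue = "Anna: Are we still on for lunch?\nBen: Yes, 12:30 at the usual place."
print(summarizer(dialogue))  # [{'summary_text': ...}]
```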
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4683 | 1.0 | 921 | 1.3897 | 46.737 | 23.2046 | 39.4441 | 43.2001 | 17.1526 |
| 1.3586 | 2.0 | 1842 | 1.3726 | 47.2757 | 23.701 | 39.7059 | 43.502 | 17.2222 |
| 1.3138 | 3.0 | 2763 | 1.3701 | 47.1222 | 23.3908 | 39.7231 | 43.3842 | 17.1465 |
| 1.2828 | 4.0 | 3684 | 1.3737 | 47.3039 | 23.5383 | 39.8402 | 43.5561 | 17.3309 |
| 1.2492 | 5.0 | 4605 | 1.3738 | 47.557 | 23.7814 | 40.1904 | 43.89 | 17.2332 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DaniloTrotta/TestDeleV2 | DaniloTrotta | 2023-05-23T11:29:42Z | 6 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-22T14:51:01Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: >-
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use
them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
pipeline_tag: text-to-image
---
# DELIBERATE
#### All in One / Any Case Version
This model provides you the ability to create anything you want.
The more prompting knowledge you have, the better the results you'll get.
It basically means that you'll never get a perfect result with just a few words.
You have to fill out your prompt in extreme detail.

#### Who will find this model perfect:
- NSFW masters
- Meticulous anatomy artists
- Creative prompters
- Art designers
Dive into the world of perfect creations with [my prompts](https://civitai.com/models/4823/deliberate "my prompts").
Your research will be appreciated, so feel free to show everyone what you can get with this model.
---
license: bigscience-openrail-m
--- |
shrria/bts-asr-processor | shrria | 2023-05-23T11:28:55Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-05-23T09:45:10Z | ---
language:
- th
library_name: transformers
pipeline_tag: automatic-speech-recognition
--- |
ArturR01/segformer-b0-scene-parse-150 | ArturR01 | 2023-05-23T11:28:22Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-05-23T10:32:28Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4530
- Mean Iou: 0.0308
- Mean Accuracy: 0.0934
- Overall Accuracy: 0.3126
- Per Category Iou: [0.368405958754407, 0.11499370080653983, 0.5753658515502771, 0.2138805564642673, 0.28958703459911295, 0.191305743989082, 0.003497854077253219, 0.1288281531360376, 0.12360856380177596, 0.0, 0.0, 0.0, 0.003947940713975041, 0.0, 0.0, 0.015025862437481299, nan, 0.0, 0.0, 0.0037038152308109247, 4.4974139869574995e-05, nan, 0.12424162490108151, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.24922118380062305, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan]
- Per Category Accuracy: [0.7566432234358786, 0.24871206280227098, 0.7073059287949548, 0.34911440750830386, 0.992694013910948, 0.2160975230593844, 0.0035416689300031504, 0.5627543803943077, 0.22603353810393492, 0.0, 0.0, nan, 0.17717717717717718, nan, 0.0, 0.017564022485946285, nan, 0.0, 0.0, 0.004741894444658622, 0.004261363636363636, nan, 0.19470855725506409, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.3939161833898676, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.6085 | 1.0 | 20 | 4.4530 | 0.0308 | 0.0934 | 0.3126 | [0.368405958754407, 0.11499370080653983, 0.5753658515502771, 0.2138805564642673, 0.28958703459911295, 0.191305743989082, 0.003497854077253219, 0.1288281531360376, 0.12360856380177596, 0.0, 0.0, 0.0, 0.003947940713975041, 0.0, 0.0, 0.015025862437481299, nan, 0.0, 0.0, 0.0037038152308109247, 4.4974139869574995e-05, nan, 0.12424162490108151, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.24922118380062305, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan] | [0.7566432234358786, 0.24871206280227098, 0.7073059287949548, 0.34911440750830386, 0.992694013910948, 0.2160975230593844, 0.0035416689300031504, 0.5627543803943077, 0.22603353810393492, 0.0, 0.0, nan, 0.17717717717717718, nan, 0.0, 0.017564022485946285, nan, 0.0, 0.0, 0.004741894444658622, 0.004261363636363636, nan, 0.19470855725506409, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.3939161833898676, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan] |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rakgesh/image-classifier-one-piece-v01 | rakgesh | 2023-05-23T11:26:39Z | 0 | 0 | null | [
"image-classification",
"region:us"
] | image-classification | 2023-05-16T21:37:08Z | ---
pipeline_tag: image-classification
--- |
DataVare/datavare-pst-to-eml-converter | DataVare | 2023-05-23T11:23:42Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-23T11:23:11Z | Install DataVare PST to EML Converter to your computer to convert Outlook PST to EML. With no data loss, users may easily and rapidly convert PST to EML format. All emails, calendar entries, tasks, events, deleted items, notes, and other data can be converted with this tool. With exact formatting and file structure, this advanced tool can rapidly and effectively convert PST files to EML format. For bulk PST to EML conversion, it supports a variety of email applications, including Apple Mail, Thunderbird, Entourage, etc. There is no file size restriction when transferring PST data to EML. Get a free trial of this program, which can convert a small number of PST files to EML file formats. This tool, which can be used by both technical as well as non-technical persons, can convert PST files to EML files. Download the software's free trial version.
Read More :- https://www.datavare.com/software/pst-to-eml-converter-expert.html |
AI4PD/ProtGPT2 | AI4PD | 2023-05-23T11:22:05Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-05-23T11:21:00Z | ---
license: apache-2.0
---
See the model at https://huggingface.co/nferruz/ProtGPT2 |
MathGpn/pretrained-bert-math2 | MathGpn | 2023-05-23T11:20:05Z | 46 | 0 | transformers | [
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2023-05-23T11:19:33Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: pretrained-bert-math2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pretrained-bert-math2
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 7.8738
- Validation Loss: 8.1023
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.0822 | 8.1287 | 0 |
| 7.9097 | 8.1196 | 1 |
| 7.8738 | 8.1023 | 2 |
### Framework versions
- Transformers 4.30.0.dev0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
timjwhite/TimsPPOLander | timjwhite | 2023-05-23T11:11:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T11:10:55Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.33 +/- 21.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository file list for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed; verify it against the repository contents.
checkpoint = load_from_hub(repo_id="timjwhite/TimsPPOLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ZavGeorge/SD_1.4_simpson_pokemon_tune_v1 | ZavGeorge | 2023-05-23T10:48:01Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:ZavGeorge/SD_1.4_simpson_tune_v1",
"base_model:adapter:ZavGeorge/SD_1.4_simpson_tune_v1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-23T07:38:04Z |
---
license: creativeml-openrail-m
base_model: ZavGeorge/SD_1.4_simpson_tune_v1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - ZavGeorge/SD_1.4_simpson_pokemon_tune_v1
These are LoRA adaptation weights for ZavGeorge/SD_1.4_simpson_tune_v1. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset.
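A hedged loading sketch (assumes a diffusers version that provides `load_lora_weights` and a CUDA device; the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the LoRA adaptation weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "ZavGeorge/SD_1.4_simpson_tune_v1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ZavGeorge/SD_1.4_simpson_pokemon_tune_v1")

image = pipe("a pokemon in the style of the Simpsons").images[0]
```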
|
atrytone/scibert_claim_id_2e-05 | atrytone | 2023-05-23T10:44:58Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T10:04:04Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: scibert_claim_id_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_claim_id_2e-05
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0162
- Accuracy: 0.9962
- F1: 0.9880
- Precision: 0.9889
- Recall: 0.9870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3131 | 1.0 | 666 | 0.2551 | 0.8880 | 0.5518 | 0.7419 | 0.4392 |
| 0.267 | 2.0 | 1332 | 0.1821 | 0.9280 | 0.7636 | 0.7875 | 0.7410 |
| 0.2245 | 3.0 | 1998 | 0.0942 | 0.9695 | 0.9034 | 0.8968 | 0.9101 |
| 0.1135 | 4.0 | 2664 | 0.0514 | 0.9845 | 0.9517 | 0.9339 | 0.9702 |
| 0.0821 | 5.0 | 3330 | 0.0223 | 0.9944 | 0.9822 | 0.9808 | 0.9837 |
| 0.0618 | 6.0 | 3996 | 0.0162 | 0.9962 | 0.9880 | 0.9889 | 0.9870 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chenyanjin/chinese_gpt2_big_50000 | chenyanjin | 2023-05-23T10:42:25Z | 136 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-23T09:14:22Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: chinese_gpt2_big_50000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese_gpt2_big_50000
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sarahcnj/codeparrot-ds | sarahcnj | 2023-05-23T10:40:33Z | 141 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-19T11:49:16Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aliakyurek/CartPole-v1 | aliakyurek | 2023-05-23T10:21:56Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-22T12:12:48Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
njuptpz/distilgpt2-finetuned-wikitext2 | njuptpz | 2023-05-23T10:09:02Z | 210 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-23T09:57:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7595 | 1.0 | 2334 | 3.6649 |
| 3.6541 | 2.0 | 4668 | 3.6466 |
| 3.6022 | 3.0 | 7002 | 3.6417 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
deepgoyal19/lora1 | deepgoyal19 | 2023-05-23T09:56:13Z | 0 | 0 | null | [
"text-to-image",
"region:us"
] | text-to-image | 2023-05-23T09:55:58Z | ---
pipeline_tag: text-to-image
--- |
UchihaMadara/Thesis-SentimentAnalysis-1 | UchihaMadara | 2023-05-23T09:39:45Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T02:06:05Z |
# Pretrained checkpoint: roberta-large
# Training hyperparameters:
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- prompt_format: sentence aspect - sentiment
# Training results
|Epoch | Train loss| Subtask 3 f1 | Subtask 3 precision | Subtask 3 recall | Subtask 4 accuracy |
|:----:|:---------:|:------------:|:-------------------:|:----------------:|:-----------------:|
|1|302.38164756447077|0.8747412008281573|0.9316427783902976|0.824390243902439|0.5219512195121951|
|2|152.67940049804747|0.8930041152263374|0.9445048966267682|0.8468292682926829|0.8614634146341463|
|3|99.03914468642324|0.9071318624935865|0.9567099567099567|0.8624390243902439|0.8721951219512195|
|4|60.156904806615785|0.905241935483871|0.9363920750782064|0.8760975609756098|0.8790243902439024|
|5|36.06248981086537|0.9195855944745931|0.9301397205588823|0.9092682926829269|0.8926829268292683|
|
darrel999/distilbert-base-uncased_emotion_ft_0523 | darrel999 | 2023-05-23T09:30:38Z | 93 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T09:11:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
- precision
model-index:
- name: distilbert-base-uncased_emotion_ft_0523
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.917
- name: F1
type: f1
value: 0.9167815299071149
- name: Precision
type: precision
value: 0.8882036697297124
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft_0523
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2694
- Accuracy: 0.917
- F1: 0.9168
- Precision: 0.8882
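A minimal usage sketch with the Transformers `pipeline` API (the example sentence is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from this repo
classifier = pipeline(
    "text-classification",
    model="darrel999/distilbert-base-uncased_emotion_ft_0523",
)

# Returns the predicted emotion label with its score
print(classifier("I can't wait to see you again!"))
```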
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| No log | 1.0 | 63 | 0.9564 | 0.641 | 0.5522 | 0.5005 |
| No log | 2.0 | 126 | 0.4544 | 0.8635 | 0.8507 | 0.8714 |
| No log | 3.0 | 189 | 0.2987 | 0.91 | 0.9093 | 0.8805 |
| 0.67 | 4.0 | 252 | 0.2694 | 0.917 | 0.9168 | 0.8882 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Mike00vito/ner | Mike00vito | 2023-05-23T09:22:05Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-23T08:03:18Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
satyamverma/distilbert-base-uncased-finetuned-mrpc | satyamverma | 2023-05-23T09:05:28Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T06:19:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8480392156862745
- name: F1
type: f1
value: 0.8945578231292517
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4304
- Accuracy: 0.8480
- F1: 0.8946
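As a usage sketch, MRPC is a sentence-pair task, so the `pipeline` call takes a `text`/`text_pair` dictionary (the sentences below are illustrative):

```python
from transformers import pipeline

# Paraphrase detection: scores whether two sentences are semantically equivalent
clf = pipeline(
    "text-classification",
    model="satyamverma/distilbert-base-uncased-finetuned-mrpc",
)

result = clf({
    "text": "The company posted strong earnings this quarter.",
    "text_pair": "Quarterly earnings for the firm were strong.",
})
print(result)  # predicted label (e.g. LABEL_1) with its confidence score
```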
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.3851 | 0.8137 | 0.8652 |
| No log | 2.0 | 460 | 0.3614 | 0.8456 | 0.8948 |
| 0.4318 | 3.0 | 690 | 0.4304 | 0.8480 | 0.8946 |
| 0.4318 | 4.0 | 920 | 0.5555 | 0.8407 | 0.8900 |
| 0.1697 | 5.0 | 1150 | 0.5883 | 0.8456 | 0.8927 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AZZLI/Magic-10 | AZZLI | 2023-05-23T08:44:25Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-23T08:42:30Z | ---
license: creativeml-openrail-m
---
|
YakovElm/test2 | YakovElm | 2023-05-23T08:38:43Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T08:37:33Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: test2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# test2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Enkhbold/mongolian-gpt2-ner | Enkhbold | 2023-05-23T08:38:24Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"mn",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-23T07:27:32Z | ---
language:
- mn
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mongolian-gpt2-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mongolian-gpt2-ner
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2599
- Precision: 0.1483
- Recall: 0.2561
- F1: 0.1878
- Accuracy: 0.9149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4822 | 1.0 | 477 | 0.3452 | 0.1156 | 0.2072 | 0.1484 | 0.8876 |
| 0.3376 | 2.0 | 954 | 0.3196 | 0.1369 | 0.2304 | 0.1717 | 0.8975 |
| 0.3084 | 3.0 | 1431 | 0.2915 | 0.1242 | 0.2257 | 0.1603 | 0.9015 |
| 0.2889 | 4.0 | 1908 | 0.2800 | 0.1328 | 0.2375 | 0.1704 | 0.9063 |
| 0.275 | 5.0 | 2385 | 0.2734 | 0.1439 | 0.2452 | 0.1814 | 0.9099 |
| 0.264 | 6.0 | 2862 | 0.2691 | 0.1426 | 0.2420 | 0.1795 | 0.9115 |
| 0.256 | 7.0 | 3339 | 0.2639 | 0.1411 | 0.2442 | 0.1789 | 0.9129 |
| 0.2498 | 8.0 | 3816 | 0.2628 | 0.1482 | 0.2511 | 0.1864 | 0.9135 |
| 0.2438 | 9.0 | 4293 | 0.2603 | 0.1483 | 0.2548 | 0.1875 | 0.9143 |
| 0.2388 | 10.0 | 4770 | 0.2599 | 0.1483 | 0.2561 | 0.1878 | 0.9149 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kasunw/PPO-from-scratch-LunarLander-v2 | kasunw | 2023-05-23T08:28:12Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-22T10:36:54Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 37.75 +/- 95.96
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 256
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'kasunw/PPO-from-scratch-LunarLander-v2'
'batch_size': 1024
'minibatch_size': 256}
```
|
ManopeDavid/my_awesome_qa_model | ManopeDavid | 2023-05-23T08:27:29Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-23T08:15:54Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ManopeDavid/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ManopeDavid/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6571
- Validation Loss: 1.8993
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4977 | 2.2290 | 0 |
| 1.9157 | 1.8993 | 1 |
| 1.6571 | 1.8993 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
abhishek2153/pb_models | abhishek2153 | 2023-05-23T08:26:48Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-19T10:41:57Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - abhishek2153/pb_models
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
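A minimal inference sketch with Diffusers, reusing the instance prompt above (a CUDA GPU is assumed):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from this repo
pipe = StableDiffusionPipeline.from_pretrained(
    "abhishek2153/pb_models", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```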
|
Xoyo/ppo-Pyramids | Xoyo | 2023-05-23T08:19:33Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-05-23T08:19:27Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: Xoyo/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mfaiq2307/faiq-wav2vec2-large-xlsr-indo-demo-colab | mfaiq2307 | 2023-05-23T08:16:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-05-12T07:40:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: faiq-wav2vec2-large-xlsr-indo-demo-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 0.4313296733972271
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# faiq-wav2vec2-large-xlsr-indo-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4079
- Wer: 0.4313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0752 | 2.92 | 400 | 2.7911 | 1.0 |
| 1.2625 | 5.84 | 800 | 0.4611 | 0.6152 |
| 0.3806 | 8.76 | 1200 | 0.4284 | 0.5476 |
| 0.2653 | 11.68 | 1600 | 0.4074 | 0.4935 |
| 0.2134 | 14.6 | 2000 | 0.3846 | 0.4788 |
| 0.1701 | 17.52 | 2400 | 0.4175 | 0.4640 |
| 0.1544 | 20.44 | 2800 | 0.4101 | 0.4471 |
| 0.1303 | 23.36 | 3200 | 0.4147 | 0.4457 |
| 0.1202 | 26.28 | 3600 | 0.4050 | 0.4344 |
| 0.1082 | 29.2 | 4000 | 0.4079 | 0.4313 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.6.1
- Tokenizers 0.13.3
|
alism98/whisper-small-persian | alism98 | 2023-05-23T08:13:54Z | 82 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"fa",
"en",
"dataset:mozilla-foundation/common_voice_13_0",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-05-22T16:58:05Z | ---
license: creativeml-openrail-m
datasets:
- mozilla-foundation/common_voice_13_0
language:
- fa
- en
metrics:
- wer
- accuracy
pipeline_tag: automatic-speech-recognition
--- |
maxingenio/platzi-vit-model-massimo | maxingenio | 2023-05-23T08:10:56Z | 193 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:pokemon-classification",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-05-23T07:48:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pokemon-classification
metrics:
- accuracy
model-index:
- name: platzi-vit-model-massimo
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: pokemon-classification
type: pokemon-classification
config: full
split: validation
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.08201438848920864
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-massimo
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the pokemon-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8941
- Accuracy: 0.0820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.9383 | 0.82 | 500 | 6.3834 | 0.0360 |
| 0.3399 | 1.64 | 1000 | 7.1051 | 0.0755 |
| 0.0749 | 2.46 | 1500 | 7.6120 | 0.0885 |
| 0.0332 | 3.28 | 2000 | 7.8941 | 0.0820 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rifkiaputri/mt5-base-id-finetune-unans-qg | rifkiaputri | 2023-05-23T07:48:31Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question-generation",
"id",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-11T05:13:59Z | ---
language: id
tags:
- mt5
- question-generation
license: mit
---
# mt5-base for Indonesian Unanswerable Question Generation (cased)
[mT5-base](https://huggingface.co/google/mt5-base) model fine-tuned on machine-translated SQuAD 2.0 dataset for generating unanswerable questions in Indonesian. Please refer to [this paper](https://aclanthology.org/2022.emnlp-main.465/) for more details on the model.
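A minimal generation sketch. The exact input format is an assumption here: a context passage from which an unanswerable question should be generated; check the paper or its repo for the precise prompt used in training.

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_id = "rifkiaputri/mt5-base-id-finetune-unans-qg"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

context = "Soekarno adalah presiden pertama Indonesia."  # illustrative Indonesian passage
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```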
## Citation Info
```bibtex
@inproceedings{putri-oh-2022-idk,
title = "{IDK}-{MRC}: Unanswerable Questions for {I}ndonesian Machine Reading Comprehension",
author = "Putri, Rifki Afina and
Oh, Alice",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.465",
pages = "6918--6933",
}
``` |
xzuyn/OpenLLaMa-200BT-Preview-7B-GGML | xzuyn | 2023-05-23T07:21:18Z | 0 | 1 | null | [
"llama",
"region:us"
] | null | 2023-05-23T07:09:05Z | ---
tags:
- llama
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/openlm-research/open_llama_7b_preview_200bt |
csukuangfj/sherpa-onnx-conformer-zh-2023-05-23 | csukuangfj | 2023-05-23T07:16:01Z | 0 | 0 | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-05-23T04:01:47Z | ---
license: apache-2.0
---
# Introduction
Models from this repo are converted from
https://huggingface.co/luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless5_offline
which is trained using
https://github.com/k2-fsa/icefall/pull/447
|
xzuyn/StableLM-OpenAssistant-SFT-V7-Epoch-3-7B-GGML | xzuyn | 2023-05-23T07:08:36Z | 0 | 0 | null | [
"gpt_neox",
"sft",
"region:us"
] | null | 2023-05-23T06:55:25Z | ---
tags:
- gpt_neox
- sft
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/OpenAssistant/stablelm-7b-sft-v7-epoch-3 |
Wulichao/ppo-LunarLander-v2 | Wulichao | 2023-05-23T07:01:38Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T07:01:16Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.75 +/- 47.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below follows the usual `huggingface_sb3` naming convention and is an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved checkpoint from the Hub and load it
# (the .zip filename is assumed from the standard naming convention)
checkpoint = load_from_hub(repo_id="Wulichao/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
xzuyn/OpenLLaMa-300BT-Preview-7B-GGML | xzuyn | 2023-05-23T06:34:30Z | 0 | 0 | null | [
"llama",
"region:us"
] | null | 2023-05-23T06:15:58Z | ---
tags:
- llama
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/openlm-research/open_llama_7b_preview_300bt |
Yhyu13/oasst-rlhf-2-llama-30b-7k-steps-gptq-4bit | Yhyu13 | 2023-05-23T06:19:06Z | 6 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-23T05:59:21Z | ---
license: apache-2.0
---
GPTQ 4-bit, no act-order version (for compatibility), that works in textgen-webui
Generated by using scripts from https://gitee.com/yhyu13/llama_-tools
Merged weights: https://huggingface.co/Yhyu13/oasst-rlhf-2-llama-30b-7k-steps-hf
Converted LLaMA weights: https://huggingface.co/Yhyu13/llama-30B-hf-openassitant
Delta weights: https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor
---
OA has done a great job RLHF-tuning their pre-trained weights. I must say the model is tuned to produce step-by-step CoT reasoning without you actively prompting it to do so,
which is a feature we observe in ChatGPT and GPT-4.

Note that it still fails at logical paradox tasks such as the era-of-time and bird-shot questions. But no LLaMA-based model, nor any available model other than GPT-4 and Claude+, can correctly answer paradox questions anyway. So the OA RLHF model is expected to fail at these tasks; still, I do like the RLHF-ed tone, which makes OA's responses sound professional and proficient.


 |
leonhe/q-FrozenLake-v1-4x4-noSlippery | leonhe | 2023-05-23T06:10:22Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T06:10:19Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook
# (it wraps hf_hub_download and unpickles the saved Q-table dictionary)
model = load_from_hub(repo_id="leonhe/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
xzuyn/RedPajama-INCITE-Chat-v1-3B-GGML | xzuyn | 2023-05-23T06:07:09Z | 0 | 0 | null | [
"gpt_neox",
"region:us"
] | null | 2023-05-23T06:04:49Z | ---
tags:
- gpt_neox
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1 |
satyamverma/Pre-requisite_Model | satyamverma | 2023-05-23T06:06:59Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-22T18:39:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Pre-requisite_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pre-requisite_Model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7370
- eval_accuracy: 0.6655
- eval_runtime: 6.6968
- eval_samples_per_second: 387.05
- eval_steps_per_second: 24.191
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Maciel/T5Corrector-base-v2 | Maciel | 2023-05-23T05:56:48Z | 145 | 14 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text error correction",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-12T08:59:14Z | ---
language:
- zh
license: apache-2.0
tags:
- t5
- text error correction
widget:
- text: "今天天气不太好,我的心情也不是很偷快"
example_title: "案例1"
- text: "能不能帮我买点淇淋,好久没吃了。"
example_title: "案例2"
- text: "脑子有点胡涂了,这道题冥冥学过还没有做出来"
example_title: "案例3"
inference:
parameters:
max_length: 256
num_beams: 10
no_repeat_ngram_size: 5
do_sample: True
early_stopping: True
---
## Features
T5Corrector: a Chinese phonetic and glyph error-correction model.
This model was trained for text error correction on top of mengzi-t5-base. Starting from 20M+ sentences, parallel correction data was constructed by substituting homophones, near-homophones, and visually similar characters, randomly inserting words into phrases, deleting some characters from phrases, and shuffling word order, yielding 200M+ sentence pairs in total, trained for 66,000 steps.
<a href='https://github.com/Macielyoung/T5Corrector'>GitHub project page</a>
Load the model:
```python
# Load the model
from transformers import AutoTokenizer, T5ForConditionalGeneration
pretrained = "Maciel/T5Corrector-base-v2"
tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = T5ForConditionalGeneration.from_pretrained(pretrained)
```
Run inference with the model:
```python
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
def correct(text, max_length):
model_inputs = tokenizer(text,
max_length=max_length,
truncation=True,
return_tensors="pt").to(device)
output = model.generate(**model_inputs,
num_beams=5,
no_repeat_ngram_size=4,
do_sample=True,
early_stopping=True,
max_length=max_length,
return_dict_in_generate=True,
output_scores=True)
pred_output = tokenizer.batch_decode(output.sequences, skip_special_tokens=True)[0]
return pred_output
text = "贵州毛台现在多少钱一瓶啊,想买两瓶尝尝味道。"
correction = correct(text, max_length=32)
print(correction)
```
### Examples
```
Example 1:
input: 能不能帮我买点淇淋,好久没吃了。
output: 能不能帮我买点冰淇淋,好久没吃了。
Example 2:
input: 脑子有点胡涂了,这道题冥冥学过还没有做出来
output: 脑子有点糊涂了,这道题明明学过还没有做出来
Example 3:
input: 今天天气不太好,我的心情也不是很偷快
output: 今天天气不太好,我的心情也不是很愉快
``` |
IGustavsen/bart-base-finetuned-english-wikilingua_epoch-1-1e-4 | IGustavsen | 2023-05-23T05:55:27Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-23T00:54:20Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IGustavsen/bart-base-finetuned-english-wikilingua_epoch-1-1e-4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IGustavsen/bart-base-finetuned-english-wikilingua_epoch-1-1e-4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6603
- Validation Loss: 2.4052
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6603 | 2.4052 | 0 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MayIBorn/ft-sd15-instance | MayIBorn | 2023-05-23T05:52:57Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-05-23T05:36:42Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: an identification photo of iom man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - MayIBorn/ft-sd15-instance
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on an identification photo of iom man using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: True.
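A minimal inference sketch with Diffusers. Note that `load_attn_procs` only pulls the UNet LoRA layers from this repo; the text-encoder LoRA that was also trained is not loaded by this call.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the UNet LoRA attention weights trained in this repo
pipe.unet.load_attn_procs("MayIBorn/ft-sd15-instance")

image = pipe("an identification photo of iom man", num_inference_steps=30).images[0]
image.save("sample.png")
```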
|
Priyanhsu/BestTextClassifier | Priyanhsu | 2023-05-23T05:23:34Z | 0 | 0 | null | [
"text-classification",
"region:us"
] | text-classification | 2023-05-22T13:12:36Z | ---
pipeline_tag: text-classification
--- |
lgfunderburk/bloomz_marketing_email | lgfunderburk | 2023-05-23T05:18:14Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-23T05:09:20Z | # Load adapters from the Hub
You can also directly load adapters from the Hub using the commands below:
```
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = f"{HUGGING_FACE_USER_NAME}/{model_name}"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=False, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
```
# Inference
You can then directly use the trained model or the model that you have loaded from the 🤗 Hub for inference as you would do it usually in transformers.
```
from IPython.display import display, Markdown
def make_inference(product, description):
batch = tokenizer(f"### INSTRUCTION\nBelow is a product and description,\
please write a marketing email for this product.\
\n\n### Product:\n{product}\n### Description:\n{description}\n\n### Marketing Email:\n",
return_tensors='pt').to(model.device)  # move the inputs onto the model's device before generate
with torch.cuda.amp.autocast():
output_tokens = model.generate(**batch, max_new_tokens=200)
display(Markdown((tokenizer.decode(output_tokens[0], skip_special_tokens=True))))
# Example
your_product_name_here = "Campfortable chair"
your_product_description_here = "A lightweight camping chair known for its comfort"
make_inference(your_product_name_here, your_product_description_here)
```
Executing the code above then yields the following
INSTRUCTION
Below is a product and description, please write a marketing email for this product.
Product:
Campfortable chair
Description:
A lightweight camping chair known for its comfort
Marketing Email:
Subject: 🏖️🌞 Get Relaxed incampfortable! ✨
Hey there, Thirsty Traveler! 😎
Imagine being able to lounge in your camping chair all day long, wave goodbye to friends, and return to camp with a refreshed, energy-filled mind and body? 🌴
That's what you’ll get with our revolutionary Campfortable Chair! 🚀
🌱 Say Goodbye to Fears of Inflatable Chairs Our revolutionary design eliminates the worries of bulky, heavy chairs. With just a few simple touches, you’ll feel like you are cradling the world in your arms! 💫
🌺 Flip through Days with Campfortable Chair When you bring Campfortable Chair with you, you’ll have the power to adjust its comfort level based on the demands of your day. Say goodbye to sore backs and headaches, and welcome to relaxed, full-body fun |
MayIBorn/ft-sd15-class-instance2 | MayIBorn | 2023-05-23T05:13:33Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-05-23T04:53:12Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: an identification photo of iom man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - MayIBorn/ft-sd15-class-instance2
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on an identification photo of iom man using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: True.
|
SergeyKazulin/Reinforce-CartPole-v1 | SergeyKazulin | 2023-05-23T05:10:04Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T05:09:23Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
xzuyn/GPT-NeoX-Erebus-20B-GGML | xzuyn | 2023-05-23T04:47:17Z | 0 | 1 | null | [
"gpt_neox",
"region:us"
] | null | 2023-05-23T04:09:23Z | ---
tags:
- gpt_neox
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/KoboldAI/GPT-NeoX-20B-Erebus |
MayIBorn/ft-sd15-class-instance | MayIBorn | 2023-05-23T04:45:29Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-05-23T04:36:04Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: an identification photo of iom person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - MayIBorn/ft-sd15-class-instance
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on an identification photo of iom person using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: True.
|
mirfan899/da_spacy_sentiment | mirfan899 | 2023-05-23T04:26:11Z | 8 | 0 | spacy | [
"spacy",
"text-classification",
"da",
"region:us"
] | text-classification | 2023-04-17T12:06:13Z | ---
tags:
- spacy
- text-classification
language:
- da
model-index:
- name: da_spacy_sentiment
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `da_spacy_sentiment` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.1,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `textcat` |
| **Components** | `tok2vec`, `textcat` |
| **Vectors** | 500000 keys, 20000 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (3 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat`** | `neutral`, `negative`, `positive` |
</details>
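### Example usage

A minimal sketch, assuming the packaged pipeline from this repo has been installed locally (e.g. via a wheel built with `spacy package`):

```python
import spacy

nlp = spacy.load("da_spacy_sentiment")
doc = nlp("Jeg er virkelig glad i dag!")  # illustrative Danish sentence
print(doc.cats)  # scores for the labels: neutral, negative, positive
```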
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 82.58 |
| `CATS_MICRO_P` | 82.40 |
| `CATS_MICRO_R` | 82.40 |
| `CATS_MICRO_F` | 82.40 |
| `CATS_MACRO_P` | 81.24 |
| `CATS_MACRO_R` | 84.43 |
| `CATS_MACRO_F` | 82.58 |
| `CATS_MACRO_AUC` | 92.45 |
| `TOK2VEC_LOSS` | 39608.07 |
| `TEXTCAT_LOSS` | 913.24 | |
Smoden/newest_Alice_mix_wizard_mix_Chronicles_diff_lora_v3 | Smoden | 2023-05-23T04:22:52Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-05-22T12:42:37Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Smoden/newest_Alice_mix_wizard_mix_Chronicles_diff_lora_v3
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following.
|
gensym/ppo-Huggy | gensym | 2023-05-23T04:14:10Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-05-16T03:15:17Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: gensym/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Shad0ws/MiniGPT-4 | Shad0ws | 2023-05-23T04:13:27Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-23T04:12:22Z | # MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
[Deyao Zhu](https://tsutikgiau.github.io/)* (On Job Market!), [Jun Chen](https://junchen14.github.io/)* (On Job Market!), [Xiaoqian Shen](https://xiaoqian-shen.github.io), [Xiang Li](https://xiangli.ac.cn), and [Mohamed Elhoseiny](https://www.mohamed-elhoseiny.com/). *Equal Contribution
**King Abdullah University of Science and Technology**
## Online Demo
Click the image to chat with MiniGPT-4 around your images
[](https://minigpt-4.github.io)
## Examples
| | |
:-------------------------:|:-------------------------:
 | 
 | 
More examples can be found in the [project page](https://minigpt-4.github.io).
## Introduction
- MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer.
- We train MiniGPT-4 with two stages. The first traditional pretraining stage is trained using roughly 5 million aligned image-text pairs in 10 hours on 4 A100s. After the first stage, Vicuna is able to understand the image, but Vicuna's generation ability is heavily impacted.
- To address this issue and improve usability, we propose a novel way to create high-quality image-text pairs by the model itself and ChatGPT together. Based on this, we then create a small (3500 pairs in total) yet high-quality dataset.
- The second finetuning stage is trained on this dataset in a conversation template to significantly improve its generation reliability and overall usability. To our surprise, this stage is computationally efficient and takes only around 7 minutes with a single A100.
- MiniGPT-4 yields many emerging vision-language capabilities similar to those demonstrated in GPT-4.

## Getting Started
### Installation
**1. Prepare the code and the environment**
Git clone our repository, create a python environment, and activate it via the following commands
```bash
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
cd MiniGPT-4
conda env create -f environment.yml
conda activate minigpt4
```
**2. Prepare the pretrained Vicuna weights**
The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B.
Please refer to our instructions [here](PrepareVicuna.md)
to prepare the Vicuna weights.
The final weights would be in a single folder with the following structure:
```
vicuna_weights
├── config.json
├── generation_config.json
├── pytorch_model.bin.index.json
├── pytorch_model-00001-of-00003.bin
...
```
Then, set the path to the vicuna weight in the model config file
[here](minigpt4/configs/models/minigpt4.yaml#L16) at Line 16.
**3. Prepare the pretrained MiniGPT-4 checkpoint**
To play with our pretrained model, download the pretrained checkpoint
[here](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link).
Then, set the path to the pretrained checkpoint in the evaluation config file
in [eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L10) at Line 11.
### Launching Demo Locally
Try out our demo [demo.py](demo.py) on your local machine by running
```
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
```
Here, we load Vicuna in 8-bit by default to save some GPU memory.
Besides, the default beam search width is 1.
Under this setting, the demo costs about 23 GB of GPU memory.
If you have a more powerful GPU with larger GPU memory, you can run the model
in 16 bit by setting low_resource to False in the config file
[minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml) and use a larger beam search width.
### Training
The training of MiniGPT-4 contains two alignment stages.
**1. First pretraining stage**
In the first pretraining stage, the model is trained using image-text pairs from the Laion and CC datasets
to align the vision and language model. To download and prepare the datasets, please check
our [first stage dataset preparation instruction](dataset/README_1_STAGE.md).
After the first stage, the visual features are mapped and can be understood by the language
model.
To launch the first stage training, run the following command. In our experiments, we use 4 A100.
You can change the save path in the config file
[train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage1_pretrain.yaml)
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```
A MiniGPT-4 checkpoint with only stage one training can be downloaded
[here](https://drive.google.com/file/d/1u9FRRBB3VovP1HxCAlpD9Lw4t4P6-Yq8/view?usp=share_link).
Compared to the model after stage two, this checkpoint frequently generates incomplete and repeated sentences.
**2. Second finetuning stage**
In the second stage, we use a small high quality image-text pair dataset created by ourselves
and convert it to a conversation format to further align MiniGPT-4.
To download and prepare our second stage dataset, please check our
[second stage dataset preparation instruction](dataset/README_2_STAGE.md).
To launch the second stage alignment,
first specify the path to the checkpoint file trained in stage 1 in
[train_configs/minigpt4_stage2_finetune.yaml](train_configs/minigpt4_stage2_finetune.yaml).
You can also specify the output path there.
Then, run the following command. In our experiments, we use 1 A100.
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
```
After the second-stage alignment, MiniGPT-4 is able to talk about the image coherently and in a user-friendly way.
## Acknowledgement
+ [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) The model architecture of MiniGPT-4 follows BLIP-2. Don't forget to check out this great open-source work if you don't know about it already!
+ [Lavis](https://github.com/salesforce/LAVIS) This repository is built upon Lavis!
+ [Vicuna](https://github.com/lm-sys/FastChat) The fantastic language ability of Vicuna with only 13B parameters is just amazing. And it is open-source!
If you're using MiniGPT-4 in your research or applications, please cite using this BibTeX:
```bibtex
@misc{zhu2022minigpt4,
title={MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models},
author={Deyao Zhu and Jun Chen and Xiaoqian Shen and xiang Li and Mohamed Elhoseiny},
year={2023},
}
```
## License
This repository is under [BSD 3-Clause License](LICENSE.md).
Many codes are based on [Lavis](https://github.com/salesforce/LAVIS) with
BSD 3-Clause License [here](LICENSE_Lavis.md).
|
Yhyu13/llama-30B-hf-openassitant | Yhyu13 | 2023-05-23T04:10:16Z | 1,523 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-22T11:54:33Z | ---
license: apache-2.0
---
This is the HF transformers version of LLaMA 30B, converted specifically as Open Assistant's 30B model requires:
https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor
These are the md5 checksums that I get locally, which match what the original repo suggests:
```
fdb311c39b8659a5d5c1991339bafc09 ./tokenizer.json
edd1a5897748864768b1fab645b31491 ./tokenizer_config.json
6b2e0a735969660e720c27061ef3f3d3 ./special_tokens_map.json
3eddc6fc02c0172d38727e5826181adb ./pytorch_model-00004-of-00007.bin
fecfda4fba7bfd911e187a85db5fa2ef ./pytorch_model.bin.index.json
462a2d07f65776f27c0facfa2affb9f9 ./pytorch_model-00007-of-00007.bin
598538f18fed1877b41f77de034c0c8a ./config.json
99762d59efa6b96599e863893cf2da02 ./pytorch_model-00006-of-00007.bin
aee09e21813368c49baaece120125ae3 ./generation_config.json
92754d6c6f291819ffc3dfcaf470f541 ./pytorch_model-00005-of-00007.bin
5cfcb78b908ffa02e681cce69dbe4303 ./pytorch_model-00002-of-00007.bin
e1dc8c48a65279fb1fbccff14562e6a3 ./pytorch_model-00003-of-00007.bin
9cffb1aeba11b16da84b56abb773d099 ./pytorch_model-00001-of-00007.bin
eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
```
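To verify a local copy against the checksums above, a quick sketch:

```python
import hashlib

def md5sum(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(md5sum("./tokenizer.json"))  # expect fdb311c39b8659a5d5c1991339bafc09
```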
|
xzuyn/GPT-2-IMDb-124M-GGML | xzuyn | 2023-05-23T04:05:10Z | 0 | 1 | null | [
"gpt2",
"gpt-2",
"region:us"
] | null | 2023-05-23T04:04:22Z | ---
tags:
- gpt2
- gpt-2
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/lvwerra/gpt2-imdb |
xzuyn/DistilGPT-2-Rap-82M-GGML | xzuyn | 2023-05-23T04:03:39Z | 0 | 1 | null | [
"gpt2",
"gpt-2",
"region:us"
] | null | 2023-05-23T04:01:33Z | ---
tags:
- gpt2
- gpt-2
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/dzionek/distilgpt2-rap |
xzuyn/Cerebras-GPT-2-Alpaca-SP-2.7B-GGML | xzuyn | 2023-05-23T03:58:46Z | 0 | 0 | null | [
"gpt2",
"gpt-2",
"region:us"
] | null | 2023-05-23T03:56:02Z | ---
tags:
- gpt2
- gpt-2
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/lxe/Cerebras-GPT-2.7B-Alpaca-SP |
xzuyn/StableLM-Base-Alpha-3B-GGML | xzuyn | 2023-05-23T03:56:54Z | 0 | 0 | null | [
"gpt_neox",
"region:us"
] | null | 2023-05-23T03:51:13Z | ---
tags:
- gpt_neox
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/stabilityai/stablelm-base-alpha-3b |
xzuyn/GPT-J-Skein-6B-GGML | xzuyn | 2023-05-23T03:48:59Z | 0 | 0 | null | [
"gptj",
"gpt-j",
"region:us"
] | null | 2023-05-23T03:33:07Z | ---
tags:
- gptj
- gpt-j
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/KoboldAI/GPT-J-6B-Skein |
xzuyn/GPT-J-Shinen-6B-GGML | xzuyn | 2023-05-23T03:32:30Z | 0 | 4 | null | [
"gptj",
"gpt-j",
"region:us"
] | null | 2023-05-23T03:11:35Z | ---
tags:
- gptj
- gpt-j
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/KoboldAI/GPT-J-6B-Shinen |
SHENMU007/speechcommand-demo | SHENMU007 | 2023-05-23T03:30:04Z | 157 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-05-23T02:41:32Z | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: speechcommand-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speechcommand-demo
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0873
- Accuracy: 0.9809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6433 | 1.0 | 399 | 0.4979 | 0.9112 |
| 0.2406 | 2.0 | 798 | 0.1455 | 0.9750 |
| 0.1563 | 3.0 | 1197 | 0.1032 | 0.9785 |
| 0.1144 | 4.0 | 1597 | 0.0919 | 0.9806 |
| 0.1254 | 5.0 | 1995 | 0.0873 | 0.9809 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
4bit/pyg-7b | 4bit | 2023-05-23T03:07:43Z | 15 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text generation",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-23T02:50:34Z | ---
language:
- en
thumbnail: null
tags:
- text generation
- conversational
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Pygmalion 7B</h1>
<h2 style="text-align: center">A conversational LLaMA fine-tune.</h2>
## Model Details
Converted from the XOR weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-7b
Pygmalion 7B is a dialogue model based on Meta's LLaMA-7B.
This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project.
## Prompting
The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [User's input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example:
```
Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests.
<START>
Assistant: Hello! How may I help you today?
You: What is Zork?
Assistant:
```
Which will generate something like:
```
Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years."
```
The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
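As a concrete sketch, the Zork example above can be reproduced with plain Transformers, assuming this repo holds standard full-precision weights (the generation settings are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("4bit/pyg-7b")
model = AutoModelForCausalLM.from_pretrained("4bit/pyg-7b", device_map="auto")

prompt = (
    "Assistant's Persona: Assistant is a highly intelligent language model "
    "trained to comply with user requests.\n"
    "<START>\n"
    "Assistant: Hello! How may I help you today?\n"
    "You: What is Zork?\n"
    "Assistant:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```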
## Limitations and biases
The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading. |
xzuyn/GPT-2-124M-GGML | xzuyn | 2023-05-23T02:50:34Z | 0 | 0 | null | [
"gpt2",
"gpt-2",
"region:us"
] | null | 2023-05-23T02:47:02Z | ---
tags:
- gpt2
- gpt-2
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/gpt2 |
kenkliesner/transformer_1_model | kenkliesner | 2023-05-23T02:49:47Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T01:57:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: transformer_1_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9296
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# transformer_1_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2347
- Accuracy: 0.9296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
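For reference, a minimal `Trainer` setup matching these hyperparameters might look like the sketch below. Only the listed hyperparameters are taken from this card; the preprocessing step is an assumption, since the original training script is not included:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Tokenize the IMDB dataset; the truncation setting here is assumed, not documented.
dataset = load_dataset("imdb")
tokenized = dataset.map(lambda x: tokenizer(x["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="transformer_1_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer default optimizer.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```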
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2312 | 1.0 | 1563 | 0.1932 | 0.9261 |
| 0.1515 | 2.0 | 3126 | 0.2347 | 0.9296 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yy-zm/00 | yy-zm | 2023-05-23T02:49:37Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-23T02:49:37Z | ---
license: creativeml-openrail-m
---
|
mirfan899/da_ner | mirfan899 | 2023-05-23T02:49:18Z | 0 | 0 | spacy | [
"spacy",
"token-classification",
"da",
"model-index",
"region:us"
] | token-classification | 2023-03-28T02:30:30Z | ---
tags:
- spacy
- token-classification
language:
- da
model-index:
- name: da_ner
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9453630482
- name: NER Recall
type: recall
value: 0.9094052559
- name: NER F Score
type: f_score
value: 0.927035601
---
| Feature | Description |
| --- | --- |
| **Name** | `da_ner` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.1,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 500000 keys, 20000 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
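A minimal usage sketch, assuming the pipeline package has been installed so that `spacy.load` can resolve it (the example sentence is illustrative, not taken from the training data):
```python
import spacy

# Load the packaged pipeline (tok2vec + ner).
nlp = spacy.load("da_ner")

# Danish example: "The price of the product was agreed with the external supplier."
doc = nlp("Prisen på produktet blev aftalt med den eksterne leverandør.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```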
### Label Scheme
<details>
<summary>View label scheme (36 labels for 1 component)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `ADVERTISING`, `AMOUNTS_OF_THE_PRODUCT`, `AVAILABILITY`, `BRANDING`, `CUSTOMERS`, `DISCOUNTS_AND_OFFERS`, `DOCUMENTATION`, `EMPLOYEES`, `EXTERNAL_SUPPLIER`, `FACILITIES`, `FINANCING`, `HANDLING_OF_SERVICE`, `LEASING`, `LEGAL`, `LOCATIONS`, `LOCATION_IN_THE_STORE`, `LOGISTICS`, `MARKETING`, `MARKET_COVERAGE`, `MEDIA`, `MESSAGES`, `ORGANIZATIONAL_STRUCTURE`, `PAYMENT_TERMS`, `PR`, `PRICE`, `PRICE_STRATEGIES`, `PRODUCT_PROPERTIES`, `PRODUCT_TYPE`, `PRODUCT_WARRANTY`, `REFERENCES`, `RETURN_ON_INVESTMENT`, `SALES_PROCESS`, `SHOWROOM`, `THE_MANAGEMENT`, `UNIFORMITY_IN_DELIVERIES`, `USE_OF_THE_PRODUCT` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 92.70 |
| `ENTS_P` | 94.54 |
| `ENTS_R` | 90.94 |
| `TOK2VEC_LOSS` | 50522.21 |
| `NER_LOSS` | 55212.43 | |
nolanaatama/ysbrvc1000pchkjv | nolanaatama | 2023-05-23T02:47:23Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-23T02:40:03Z | ---
license: creativeml-openrail-m
---
|
xzuyn/CodeGPT-Small-Py-117M-GGML | xzuyn | 2023-05-23T02:45:30Z | 0 | 0 | null | [
"gpt2",
"gpt-2",
"region:us"
] | null | 2023-05-23T02:43:00Z | ---
tags:
- gpt2
- gpt-2
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/microsoft/CodeGPT-small-py |
redax123/valcroanime | redax123 | 2023-05-23T02:43:31Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-23T02:37:53Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### valcroanime Dreambooth model trained by redax123 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
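To try the concept locally with `diffusers`, a minimal sketch follows. Note that the prompt token `valcroanime` is an assumption based on the repo name; use whatever instance prompt the model was actually trained with:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "redax123/valcroanime", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# "valcroanime" as the concept token is assumed from the repo name.
image = pipe("a portrait of a woman, valcroanime style").images[0]
image.save("sample.png")
```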
Sample pictures of this concept:
|
xzuyn/RWKV-4-Raven-3B-v11-Eng99-Other1-20230425-ctx4096-GGML | xzuyn | 2023-05-23T02:39:17Z | 0 | 1 | null | [
"rwkv",
"region:us"
] | null | 2023-05-23T02:31:26Z | ---
tags:
- rwkv
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/BlinkDL/rwkv-4-raven |
Xoyo/Reinforce-Pixelcopter-PLE-v0 | Xoyo | 2023-05-23T02:36:01Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T02:35:18Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 13.10 +/- 11.52
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
xzuyn/RWKV-4-Raven-7B-v11x-Eng99-Other1-20230429-ctx8192-GGML | xzuyn | 2023-05-23T02:34:53Z | 0 | 4 | null | [
"rwkv",
"region:us"
] | null | 2023-05-23T02:24:02Z | ---
tags:
- rwkv
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/BlinkDL/rwkv-4-raven |