modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-01 00:49:44) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 461 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-01 00:49:44) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
varunlpai/t5-base-cbs | varunlpai | 2023-04-06T09:12:57Z | 90 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-30T22:38:12Z | Author: Varun Pai
Website: https://www.varunlpai.com/ |
romainf/distilbert-base-uncased-imdb-3000 | romainf | 2023-04-06T09:11:17Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-06T08:40:03Z | This model is the step-3000 checkpoint of distilbert-base-uncased, fine-tuned on the IMDB dataset with the following training arguments:
```python
from transformers import Trainer, TrainingArguments

# model, tokenizer, tokenized_imdb, data_collator and compute_metrics are defined elsewhere.
training_args = TrainingArguments(
output_dir="bert_results_imdb",
learning_rate=1e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=0.01,
warmup_ratio = 0.06,
max_steps = 5000,
optim = 'adamw_torch',
save_strategy = 'steps',
evaluation_strategy='steps',
load_best_model_at_end=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_imdb["train"],
eval_dataset=tokenized_imdb["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
``` |
romainf/distilbert-base-uncased-imdb-2000 | romainf | 2023-04-06T09:11:02Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-06T08:35:59Z | This model is the step-2000 checkpoint of distilbert-base-uncased, fine-tuned on the IMDB dataset with the following training arguments:
```python
from transformers import Trainer, TrainingArguments

# model, tokenizer, tokenized_imdb, data_collator and compute_metrics are defined elsewhere.
training_args = TrainingArguments(
output_dir="bert_results_imdb",
learning_rate=1e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=0.01,
warmup_ratio = 0.06,
max_steps = 5000,
optim = 'adamw_torch',
save_strategy = 'steps',
evaluation_strategy='steps',
load_best_model_at_end=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_imdb["train"],
eval_dataset=tokenized_imdb["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
``` |
romainf/distilbert-base-uncased-imdb-5000 | romainf | 2023-04-06T09:10:04Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-06T08:42:23Z | This model is the step-5000 checkpoint of distilbert-base-uncased, fine-tuned on the IMDB dataset with the following training arguments:
```python
from transformers import Trainer, TrainingArguments

# model, tokenizer, tokenized_imdb, data_collator and compute_metrics are defined elsewhere.
training_args = TrainingArguments(
output_dir="bert_results_imdb",
learning_rate=1e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=0.01,
warmup_ratio = 0.06,
max_steps = 5000,
optim = 'adamw_torch',
save_strategy = 'steps',
evaluation_strategy='steps',
load_best_model_at_end=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_imdb["train"],
eval_dataset=tokenized_imdb["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
``` |
babatu99/edelinbabatu | babatu99 | 2023-04-06T08:53:14Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-04-06T08:52:57Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ConvLab/setsumbt-dst-multiwoz21 | ConvLab | 2023-04-06T08:51:53Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"classification",
"dialog state tracking",
"conversational system",
"task-oriented dialog",
"en",
"dataset:ConvLab/multiwoz21",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-11-30T10:58:37Z | ---
language:
- en
license: apache-2.0
tags:
- roberta
- classification
- dialog state tracking
- conversational system
- task-oriented dialog
datasets:
- ConvLab/multiwoz21
metrics:
- Joint Goal Accuracy
- Slot F1
model-index:
- name: setsumbt-dst-multiwoz21
results:
- task:
type: classification
name: dialog state tracking
dataset:
type: ConvLab/multiwoz21
name: MultiWOZ21
split: test
metrics:
- type: Joint Goal Accuracy
value: 50.3
name: JGA
- type: Slot F1
value: 90.8
name: Slot F1
---
# SetSUMBT-dst-multiwoz21
This model is a fine-tuned [SetSUMBT](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/dst/setsumbt) version of [roberta-base](https://huggingface.co/roberta-base), trained on [MultiWOZ2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
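Loading this checkpoint requires the ConvLab-3 toolkit rather than a stock `transformers` class. As a minimal sketch (assuming only that `huggingface_hub` is installed), the repository can be fetched locally before handing the path to ConvLab-3:

```python
from huggingface_hub import snapshot_download

# Download the full repository (config, tokenizer files and weights) to a local folder.
local_dir = snapshot_download(repo_id="ConvLab/setsumbt-dst-multiwoz21")
print(local_dir)  # pass this path to the SetSUMBT tracker in ConvLab-3
```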
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00001
- train_batch_size: 3
- eval_batch_size: 16
- seed: 0
- gradient_accumulation_steps: 1
- optimizer: AdamW
- lr_scheduler_type: linear
- num_epochs: 50.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0+cu110
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ConvLab/setsumbt-dst_nlu-multiwoz21-EnD2 | ConvLab | 2023-04-06T08:51:20Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"classification",
"dialog state tracking",
"natural language understanding",
"uncertainty",
"conversational system",
"task-oriented dialog",
"en",
"dataset:ConvLab/multiwoz21",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-11-30T11:00:04Z | ---
language:
- en
license: apache-2.0
tags:
- roberta
- classification
- dialog state tracking
- natural language understanding
- uncertainty
- conversational system
- task-oriented dialog
datasets:
- ConvLab/multiwoz21
metrics:
- Joint Goal Accuracy
- Slot F1
- Joint Goal Expected Calibration Error
model-index:
- name: setsumbt-dst-nlu-multiwoz21
results:
- task:
type: classification
name: dialog state tracking
dataset:
type: ConvLab/multiwoz21
name: MultiWOZ21
split: test
metrics:
- type: Joint Goal Accuracy
value: 51.8
name: JGA
- type: Slot F1
value: 91.1
name: Slot F1
- type: Joint Goal Expected Calibration Error
value: 12.7
name: JECE
---
# SetSUMBT-dst-nlu-multiwoz21
This model is a fine-tuned [SetSUMBT](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/dst/setsumbt) version of [roberta-base](https://huggingface.co/roberta-base), trained on [MultiWOZ2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
This is a combined DST and NLU model, obtained by distribution-distilling an ensemble of 5 models. It should be used to produce uncertainty estimates for the dialogue belief state.
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00001
- train_batch_size: 3
- eval_batch_size: 16
- seed: 0
- gradient_accumulation_steps: 1
- optimizer: AdamW
- loss: Ensemble Distribution Distillation Loss
- lr_scheduler_type: linear
- num_epochs: 50.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0+cu110
- Datasets 2.3.2
- Tokenizers 0.12.1
|
adhisetiawan/poca-SoccerTwos | adhisetiawan | 2023-04-06T08:43:03Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-04-06T08:42:55Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: adhisetiawan/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
DevBeom/stable-diffusion-class2 | DevBeom | 2023-04-06T08:38:43Z | 39 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-06T08:35:13Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Stable_Diffusion_Class2 Dreambooth model trained by DevBeom with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
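Outside of the Colab notebooks, the checkpoint can also be loaded with `diffusers`. A minimal sketch follows; the prompt keyword is an assumption based on the concept name, and a CUDA GPU is assumed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DevBeom/stable-diffusion-class2", torch_dtype=torch.float16
).to("cuda")

# "stable_diffusion_class2" as the instance keyword is an assumption; check the repo for the exact token.
image = pipe("a photo of stable_diffusion_class2, highly detailed").images[0]
image.save("sample.png")
```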
Sample pictures of this concept:
|
JWP/marian-finetuned-kde4-en-to-fr | JWP | 2023-04-06T08:29:01Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-04-06T07:53:48Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
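As a quick way to try it out, here is a minimal inference sketch using the standard `transformers` translation pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned English-to-French translation model.
translator = pipeline("translation", model="JWP/marian-finetuned-kde4-en-to-fr")

result = translator("Default to expanded threads")
print(result[0]["translation_text"])
```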
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa | vocabtrimmer | 2023-04-06T08:04:59Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question answering",
"ja",
"dataset:lmqg/qg_jaquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-04-06T08:00:12Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: ja
datasets:
- lmqg/qg_jaquad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: 新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?, context: 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。"
example_title: "Question Answering Example 1"
- text: "question: 1968年に開催されたオリンピックの名前は何ですか?, context: オリンピックが世界的大イベントに成長するに従って政治に左右されるようになると、1968年のメキシコシティ大会では黒人差別を訴える場と化し、1972年のミュンヘン大会ではアラブのゲリラによるイスラエル選手に対するテロ事件まで起きた(ミュンヘンオリンピック事件)。1976年のモントリオール大会になると、ニュージーランドのラグビーチームの南アフリカ遠征に反対してアフリカの諸国22ヶ国がボイコットを行った。そして、1980年のモスクワ大会ではソ連のアフガニスタン侵攻に反発したアメリカ・西ドイツ・日本などの西側諸国が相次いでボイコットを行った。1984年ロサンゼルス大会ではソ連と東側諸国が報復ボイコットを行ない、参加したのはソ連と対立していた中国とルーマニアだけだった。中でも、イラン革命後のイラン・イスラム共和国はモスクワとロサンゼルス双方のオリンピックをボイコットしている。オリンピックが巨大化するに従って財政負担の増大が大きな問題となり、1976年の夏季大会では大幅な赤字を出し、その後夏季・冬季とも立候補都市が1〜2都市だけという状態が続いた。"
example_title: "Question Answering Example 2"
model-index:
- name: vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_jaquad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 0.0
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 67.22
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 53.01
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 90.65
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 89.42
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 70.55
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 70.55
---
# Model Card of `vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa`
This model is a fine-tuned version of [vocabtrimmer/mbart-large-cc25-trimmed-ja](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja) for the question answering task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mbart-large-cc25-trimmed-ja](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa")
# model prediction
answers = model.answer_q(list_question="新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?", list_context=" 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa")
output = pipe("question: 新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?, context: 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 70.55 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| AnswerF1Score | 70.55 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| BERTScore | 90.65 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 67.17 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 53.01 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 89.42 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 67.22 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: vocabtrimmer/mbart-large-cc25-trimmed-ja
- max_length: 512
- max_length_output: 32
- epoch: 16
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
vocabtrimmer/mbart-large-cc25-trimmed-ko-koquad-qg | vocabtrimmer | 2023-04-06T08:04:10Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question generation",
"ko",
"dataset:lmqg/qg_koquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-04-06T07:52:27Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: ko
datasets:
- lmqg/qg_koquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다."
example_title: "Question Generation Example 1"
- text: "백신이 없기때문에 예방책은 <hl> 살충제 <hl> 를 사용하면서 서식 장소(찻찬 받침, 배수로, 고인 물의 열린 저장소, 버려진 타이어 등)의 수를 줄임으로써 매개체를 통제할 수 있다."
example_title: "Question Generation Example 2"
- text: "<hl> 원테이크 촬영 <hl> 이기 때문에 한 사람이 실수를 하면 처음부터 다시 찍어야 하는 상황이 발생한다."
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mbart-large-cc25-trimmed-ko-koquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_koquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 11.59
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 27.96
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 30.17
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 83.8
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 82.98
---
# Model Card of `vocabtrimmer/mbart-large-cc25-trimmed-ko-koquad-qg`
This model is a fine-tuned version of [vocabtrimmer/mbart-large-cc25-trimmed-ko](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ko) for the question generation task on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mbart-large-cc25-trimmed-ko](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ko)
- **Language:** ko
- **Training data:** [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ko", model="vocabtrimmer/mbart-large-cc25-trimmed-ko-koquad-qg")
# model prediction
questions = model.generate_q(list_context="1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.", list_answer="남부군")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mbart-large-cc25-trimmed-ko-koquad-qg")
output = pipe("1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ko-koquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 83.8 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_1 | 27.01 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_2 | 19.9 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_3 | 15.07 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_4 | 11.59 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| METEOR | 30.17 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| MoverScore | 82.98 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| ROUGE_L | 27.96 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_koquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: vocabtrimmer/mbart-large-cc25-trimmed-ko
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ko-koquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
dolphinz/exlora | dolphinz | 2023-04-06T07:34:00Z | 0 | 4 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-02-08T14:41:34Z | ---
license: cc-by-nc-4.0
---
MBW for xmdp.WD
xilmo - 1, 0.5, 0.5, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1
dpep - 0, 0.5, 0.5, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0 |
nizar-sayad/opt-350m-finetuned-openbookcorpus | nizar-sayad | 2023-04-06T07:03:31Z | 60 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"opt",
"text-generation",
"generated_from_keras_callback",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-06T06:23:19Z | ---
license: other
tags:
- generated_from_keras_callback
model-index:
- name: nizar-sayad/opt-350m-finetuned-openbookcorpus
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nizar-sayad/opt-350m-finetuned-openbookcorpus
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2144
- Validation Loss: 3.7865
- Epoch: 0
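For quick inspection of the generations, a minimal sketch with the `transformers` pipeline; the repo ships TensorFlow weights, so TensorFlow is assumed to be installed and the framework is set explicitly:

```python
from transformers import pipeline

# framework="tf" because this checkpoint was saved from Keras/TensorFlow training.
generator = pipeline(
    "text-generation",
    model="nizar-sayad/opt-350m-finetuned-openbookcorpus",
    framework="tf",
)
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```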
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2144 | 3.7865 | 0 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
jkhan447/HateXplain-2nd-anno-labeled | jkhan447 | 2023-04-06T07:01:52Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-18T08:23:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: HateXplain-2nd-anno-labeled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HateXplain-2nd-anno-labeled
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5400
- Accuracy: 0.6110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
jkhan447/HateXplain-majority-labeled | jkhan447 | 2023-04-06T07:01:45Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-17T07:19:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: HateXplain-majority-labeled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HateXplain-majority-labeled
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4749
- Accuracy: 0.6708
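A minimal inference sketch with the `transformers` text-classification pipeline; note that the label names come from the repo's config and may be generic `LABEL_*` ids:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jkhan447/HateXplain-majority-labeled")

# Returns the predicted label and score for each input text.
print(classifier("I really enjoyed talking with you today."))
```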
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
intanm/mlm-20230406-002-5 | intanm | 2023-04-06T06:41:33Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-04-06T05:56:23Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: mlm-20230406-002-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm-20230406-002-5
This model is a fine-tuned version of [intanm/mlm-20230405-002-3](https://huggingface.co/intanm/mlm-20230405-002-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7441
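A minimal inference sketch with the `transformers` fill-mask pipeline; the mask token is read from the tokenizer rather than hard-coded, since the card does not document it:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="intanm/mlm-20230406-002-5")

# Build a masked sentence using whatever mask token this tokenizer defines (e.g. [MASK] for BERT).
text = f"The company reported higher {fill.tokenizer.mask_token} this quarter."
for prediction in fill(text)[:3]:
    print(prediction["token_str"], prediction["score"])
```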
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 284 | 2.8540 |
| 3.1889 | 2.0 | 568 | 2.5429 |
| 3.1889 | 3.0 | 852 | 2.4145 |
| 2.5305 | 4.0 | 1136 | 2.2699 |
| 2.5305 | 5.0 | 1420 | 2.1402 |
| 2.2343 | 6.0 | 1704 | 2.1209 |
| 2.2343 | 7.0 | 1988 | 2.0147 |
| 2.0581 | 8.0 | 2272 | 1.9715 |
| 1.942 | 9.0 | 2556 | 1.9564 |
| 1.942 | 10.0 | 2840 | 1.9158 |
| 1.8372 | 11.0 | 3124 | 1.9038 |
| 1.8372 | 12.0 | 3408 | 1.8593 |
| 1.7595 | 13.0 | 3692 | 1.8076 |
| 1.7595 | 14.0 | 3976 | 1.8470 |
| 1.7023 | 15.0 | 4260 | 1.7716 |
| 1.6629 | 16.0 | 4544 | 1.7706 |
| 1.6629 | 17.0 | 4828 | 1.7681 |
| 1.629 | 18.0 | 5112 | 1.7283 |
| 1.629 | 19.0 | 5396 | 1.7673 |
| 1.5984 | 20.0 | 5680 | 1.7507 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
sd-concepts-library/miumiu | sd-concepts-library | 2023-04-06T06:23:26Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2023-04-06T06:23:23Z | ---
license: mit
---
### miumiu on Stable Diffusion
This is the `<miumiu>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
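Alternatively, the learned embedding can be loaded directly with `diffusers`. A minimal sketch follows; the Stable Diffusion 1.5 base model is an assumption, and `load_textual_inversion` requires a reasonably recent `diffusers` release:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <miumiu> token embedding from this repository.
pipe.load_textual_inversion("sd-concepts-library/miumiu")

image = pipe("a photo of <miumiu> on a wooden table").images[0]
image.save("miumiu.png")
```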
Here is the new concept you will be able to use as an `object`:




|
PAIR/text2video-zero-controlnet-canny-arcane | PAIR | 2023-04-06T06:22:12Z | 50 | 29 | diffusers | [
"diffusers",
"text-to-video",
"text-to-image",
"arxiv:2303.13439",
"arxiv:2208.12242",
"arxiv:2302.05543",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-video | 2023-03-25T06:28:09Z | ---
license: creativeml-openrail-m
library_name: diffusers
inference: true
pipeline_tag: text-to-video
tags:
- text-to-video
- text-to-image
---
# Text2Video-Zero Model Card - ControlNet Canny Arcane Style
[Text2Video-Zero](https://arxiv.org/abs/2303.13439) is a zero-shot text to video generator. It can perform `zero-shot text-to-video generation`, `Video Instruct Pix2Pix` (instruction-guided video editing),
`text and pose conditional video generation`, `text and canny-edge conditional video generation`, and
`text, canny-edge and dreambooth conditional video generation`. For more information about this work,
please have a look at our [paper](https://arxiv.org/abs/2303.13439) and our demo: [](https://huggingface.co/spaces/PAIR/Text2Video-Zero)
Our [code](https://github.com/Picsart-AI-Research/Text2Video-Zero) works with any StableDiffusion base model.
This model provides [DreamBooth](https://arxiv.org/abs/2208.12242) weights for the `Arcane style` to be used with edge guidance (using [ControlNet](https://arxiv.org/abs/2302.05543)) in text2video zero.
## Weights for Text2Video-Zero
We converted the original weights into diffusers and made them usable for [ControlNet](https://arxiv.org/abs/2302.05543) with edge guidance using: https://github.com/lllyasviel/ControlNet/discussions/12.
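For the text-to-image setting, here is a minimal sketch of pairing these weights with a canny-edge ControlNet in `diffusers`; the `lllyasviel/sd-controlnet-canny` checkpoint and the edge-map preparation are assumptions, and the video pipeline itself lives in the Text2Video-Zero repository:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Canny ControlNet supplies the edge conditioning; this repo supplies the Arcane-style base weights.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "PAIR/text2video-zero-controlnet-canny-arcane",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

edge_map = load_image("canny_edges.png")  # a precomputed canny edge image of the source frame
image = pipe("arcane style portrait of a warrior", image=edge_map).images[0]
image.save("arcane_frame.png")
```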
### Model Details
- **Developed by:** Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan and Humphrey Shi
- **Model type:** Dreambooth text-to-image and text-to-video generation model with edge control for text2video zero
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
- **Model Description:** This is a model for [text2video zero](https://github.com/Picsart-AI-Research/Text2Video-Zero) with edge guidance and arcane style.
It can be used also with ControlNet in a text-to-image setup with edge guidance.
- **DreamBooth Keyword:** arcane style
- **Resources for more information:** [GitHub](https://github.com/Picsart-AI-Research/Text2Video-Zero), [Paper](https://arxiv.org/abs/2303.13439), [CIVITAI](https://civitai.com/models/23/arcane-diffusion).
- **Cite as:**
@article{text2video-zero,
title={Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators},
author={Khachatryan, Levon and Movsisyan, Andranik and Tadevosyan, Vahram and Henschel, Roberto and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
journal={arXiv preprint arXiv:2303.13439},
year={2023}
}
## Original Weights
The Dreambooth weights for the Arcane style were taken from [CIVITAI](https://civitai.com/models/23/arcane-diffusion).
### Model Details
- **Developed by:** Quiet_Joker (Username listed on CIVITAI)
- **Model type:** Dreambooth text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
- **Model Description:** This is a model that was created using [DreamBooth](https://arxiv.org/abs/2208.12242) to generate images with Arcane style, based on text prompts.
- **DreamBooth Keyword:** arcane style
- **Resources for more information:** [CIVITAI](https://civitai.com/models/23/arcane-diffusion).
## Biases and content acknowledgement
Beware that Text2Video-Zero may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography, and violence. Text2Video-Zero in this demo is meant only for research purposes.
# Citation
@article{text2video-zero,
title={Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators},
author={Khachatryan, Levon and Movsisyan, Andranik and Tadevosyan, Vahram and Henschel, Roberto and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
journal={arXiv preprint arXiv:2303.13439},
year={2023}
} |
PAIR/text2video-zero-controlnet-canny-gta5 | PAIR | 2023-04-06T06:21:37Z | 52 | 14 | diffusers | [
"diffusers",
"text-to-video",
"text-to-image",
"arxiv:2303.13439",
"arxiv:2208.12242",
"arxiv:2302.05543",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-video | 2023-03-24T19:13:48Z | ---
license: creativeml-openrail-m
library_name: diffusers
inference: true
pipeline_tag: text-to-video
tags:
- text-to-video
- text-to-image
---
# Text2Video-Zero Model Card - ControlNet Canny GTA-5 Style
[Text2Video-Zero](https://arxiv.org/abs/2303.13439) is a zero-shot text to video generator. It can perform `zero-shot text-to-video generation`, `Video Instruct Pix2Pix` (instruction-guided video editing),
`text and pose conditional video generation`, `text and canny-edge conditional video generation`, and
`text, canny-edge and dreambooth conditional video generation`. For more information about this work,
please have a look at our [paper](https://arxiv.org/abs/2303.13439) and our demo: [](https://huggingface.co/spaces/PAIR/Text2Video-Zero)
Our [code](https://github.com/Picsart-AI-Research/Text2Video-Zero) works with any StableDiffusion base model.
This model provides [DreamBooth](https://arxiv.org/abs/2208.12242) weights for the `GTA-5 style` to be used with edge guidance (using [ControlNet](https://arxiv.org/abs/2302.05543)) in text2video zero.
## Weights for Text2Video-Zero
We converted the original weights into diffusers and made them usable for [ControlNet](https://arxiv.org/abs/2302.05543) with edge guidance using: https://github.com/lllyasviel/ControlNet/discussions/12.
### Model Details
- **Developed by:** Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan and Humphrey Shi
- **Model type:** Dreambooth text-to-image and text-to-video generation model with edge control for text2video zero
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
- **Model Description:** This is a model for [text2video zero](https://github.com/Picsart-AI-Research/Text2Video-Zero) with edge guidance and gta-5 style.
It can be used also with ControlNet in a text-to-image setup with edge guidance.
- **DreamBooth Keyword:** gtav style
- **Resources for more information:** [GitHub](https://github.com/Picsart-AI-Research/Text2Video-Zero), [Paper](https://arxiv.org/abs/2303.13439), [CIVITAI](https://civitai.com/models/1309/gta5-artwork-diffusion).
- **Cite as:**
@article{text2video-zero,
title={Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators},
author={Khachatryan, Levon and Movsisyan, Andranik and Tadevosyan, Vahram and Henschel, Roberto and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
journal={arXiv preprint arXiv:2303.13439},
year={2023}
}
## Original Weights
The Dreambooth weights for the GTA-5 style were taken from [CIVITAI](https://civitai.com/models/1309/gta5-artwork-diffusion).
### Model Details
- **Developed by:** Quiet_Joker (Username listed on CIVITAI)
- **Model type:** Dreambooth text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
- **Model Description:** This is a model that was created using [DreamBooth](https://arxiv.org/abs/2208.12242) to generate images with GTA-5 style, based on text prompts.
- **DreamBooth Keyword:** gtav style
- **Resources for more information:** [CIVITAI](https://civitai.com/models/1309/gta5-artwork-diffusion).
## Biases and content acknowledgement
Beware that Text2Video-Zero may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography, and violence. Text2Video-Zero in this demo is meant only for research purposes.
# Citation
@article{text2video-zero,
title={Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators},
author={Khachatryan, Levon and Movsisyan, Andranik and Tadevosyan, Vahram and Henschel, Roberto and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
journal={arXiv preprint arXiv:2303.13439},
year={2023}
} |
PAIR/text2video-zero-controlnet-canny-anime | PAIR | 2023-04-06T06:21:15Z | 78 | 19 | diffusers | [
"diffusers",
"text-to-video",
"text-to-image",
"arxiv:2303.13439",
"arxiv:2208.12242",
"arxiv:2302.05543",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-video | 2023-03-25T06:27:43Z | ---
license: creativeml-openrail-m
library_name: diffusers
inference: true
pipeline_tag: text-to-video
tags:
- text-to-video
- text-to-image
---
# Text2Video-Zero Model Card - ControlNet Canny Anime Style
[Text2Video-Zero](https://arxiv.org/abs/2303.13439) is a zero-shot text to video generator. It can perform `zero-shot text-to-video generation`, `Video Instruct Pix2Pix` (instruction-guided video editing),
`text and pose conditional video generation`, `text and canny-edge conditional video generation`, and
`text, canny-edge and dreambooth conditional video generation`. For more information about this work,
please have a look at our [paper](https://arxiv.org/abs/2303.13439) and our demo: [](https://huggingface.co/spaces/PAIR/Text2Video-Zero)
Our [code](https://github.com/Picsart-AI-Research/Text2Video-Zero) works with any StableDiffusion base model.
This model provides [DreamBooth](https://arxiv.org/abs/2208.12242) weights for the `Anime style` to be used with edge guidance (using [ControlNet](https://arxiv.org/abs/2302.05543)) in text2video zero.
## Weights for Text2Video-Zero
We converted the original weights into diffusers and made them usable for [ControlNet](https://arxiv.org/abs/2302.05543) with edge guidance using: https://github.com/lllyasviel/ControlNet/discussions/12.
### Model Details
- **Developed by:** Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan and Humphrey Shi
- **Model type:** Dreambooth text-to-image and text-to-video generation model with edge control for text2video zero
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
- **Model Description:** This is a model for [text2video zero](https://github.com/Picsart-AI-Research/Text2Video-Zero) with edge guidance and anime style.
It can be used also with ControlNet in a text-to-image setup with edge guidance.
- **DreamBooth Keyword:** anime style
- **Resources for more information:** [GitHub](https://github.com/Picsart-AI-Research/Text2Video-Zero), [Paper](https://arxiv.org/abs/2303.13439), [CIVITAI](https://civitai.com/models/8740/superanime-viper).
- **Cite as:**
@article{text2video-zero,
title={Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators},
author={Khachatryan, Levon and Movsisyan, Andranik and Tadevosyan, Vahram and Henschel, Roberto and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
journal={arXiv preprint arXiv:2303.13439},
year={2023}
}
## Original Weights
The Dreambooth weights for the Anime style were taken from [CIVITAI](https://civitai.com/models/8740/superanime-viper).
### Model Details
- **Developed by:** Quiet_Joker (Username listed on CIVITAI)
- **Model type:** Dreambooth text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
- **Model Description:** This is a model that was created using [DreamBooth](https://arxiv.org/abs/2208.12242) to generate images with Anime style, based on text prompts.
- **DreamBooth Keyword:** anime style
- **Resources for more information:** [CIVITAI](https://civitai.com/models/8740/superanime-viper).
## Biases and content acknowledgement
Beware that Text2Video-Zero may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography, and violence. Text2Video-Zero in this demo is meant only for research purposes.
# Citation
@article{text2video-zero,
title={Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators},
author={Khachatryan, Levon and Movsisyan, Andranik and Tadevosyan, Vahram and Henschel, Roberto and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
journal={arXiv preprint arXiv:2303.13439},
year={2023}
}
|
heidragon3045/dqn-SpaceInvadersNoFrameskip-v4 | heidragon3045 | 2023-04-06T06:19:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-06T06:18:28Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 601.50 +/- 175.04
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga heidragon3045 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga heidragon3045 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga heidragon3045
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
IDEA-CCNL/Randeng-Pegasus-523M-Summary-Chinese-V1 | IDEA-CCNL | 2023-04-06T06:14:02Z | 217 | 5 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"chinese",
"zh",
"arxiv:1912.08777",
"arxiv:2209.02970",
"autotrain_compatible",
"region:us"
] | summarization | 2023-01-13T09:21:54Z | ---
language: zh
tags:
- summarization
- chinese
inference: False
---
# Randeng-Pegasus-523M-Summary-Chinese-V1
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/summary/randeng_pegasus_523M_summary.sh)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/zh/latest/docs/%E7%87%83%E7%81%AF%E7%B3%BB%E5%88%97/Randeng-Pegasus-238M-Summary-Chinese.html)
## 简介 Brief Introduction
善于处理摘要任务,在数个中文摘要数据集上微调后的,中文版的PEGASUS-large。
A Chinese PEGASUS-large that is good at text summarization, obtained by fine-tuning on several Chinese text summarization datasets.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | PEGASUS | 523M | 文本摘要任务-中文 Summary-Chinese |
## 模型信息 Model Information
参考论文:[PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf)
基于[Randeng-Pegasus-523M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Chinese),我们在收集的7个中文领域的文本摘要数据集(约4M个样本),使用实体过滤后数据集(约1.8M)重新微调,在不损伤下游指标的情况下提升了摘要对原文的忠实度,得到了summary-v1版本。这7个数据集为:education, new2016zh, nlpcc, shence, sohu, thucnews和weibo。
Based on [Randeng-Pegasus-523M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Chinese), we re-fine-tuned a text summarization version (summary-v1) on an entity-filtered subset (about 1.8M samples) of 7 Chinese text summarization datasets totaling around 4M samples. This improves the faithfulness of the summaries to the source text without hurting downstream metrics, e.g. ROUGE-L on LCSTS. The datasets include: education, new2016zh, nlpcc, shence, sohu, thucnews and weibo.
### 下游效果 Performance
| datasets | rouge-1 | rouge-2 | rouge-L |
| ---- | ---- | ---- | ---- |
| LCSTS | 46.94 | 33.92 | 43.51 |
## 使用 Usage
```python
from transformers import PegasusForConditionalGeneration
# Need to download tokenizers_pegasus.py and other Python script from Fengshenbang-LM github repo in advance,
# or you can download tokenizers_pegasus.py and data_utils.py in https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_523M/tree/main
# Strongly recommend you git clone the Fengshenbang-LM repo:
# 1. git clone https://github.com/IDEA-CCNL/Fengshenbang-LM
# 2. cd Fengshenbang-LM/fengshen/examples/pegasus/
# and then you will see the tokenizers_pegasus.py and data_utils.py which are needed by pegasus model
from tokenizers_pegasus import PegasusTokenizer
model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Summary-Chinese-V1")
tokenizer = PegasusTokenizer.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Summary-Chinese-V1")
text = "在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!"
inputs = tokenizer(text, max_length=1024, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# model Output: 自由式滑雪女子坡面障碍技巧决赛谷爱凌摘银
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
chansung/alpaca-lora-30b | chansung | 2023-04-06T06:07:13Z | 0 | 50 | null | [
"alpaca",
"llama",
"chat",
"text2text-generation",
"en",
"dataset:yahma/alpaca-cleaned",
"license:gpl-3.0",
"region:us"
] | text2text-generation | 2023-03-19T00:33:16Z | ---
license: gpl-3.0
datasets:
- yahma/alpaca-cleaned
language:
- en
pipeline_tag: text2text-generation
tags:
- alpaca
- llama
- chat
---
This repository provides a LoRA checkpoint that turns LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning run with the following settings on an 8xA100 (40G) DGX system.
- Dataset: [cleaned-up Alpaca dataset](https://github.com/gururise/AlpacaDataCleaned) up to 04/06/23
- Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation and run as follows:
```shell
python finetune.py \
--base_model='decapoda-research/llama-30b-hf' \
--num_epochs=10 \
--cutoff_len=512 \
--group_by_length \
--output_dir='./lora-alpaca' \
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
--batch_size=... \
--micro_batch_size=...
``` |
kog50000/zzov | kog50000 | 2023-04-06T06:02:16Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-06T05:32:59Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### zzov Dreambooth model trained by kog50000 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
alkiskoudounas/a2c-AntBulletEnv-v0 | alkiskoudounas | 2023-04-06T06:00:09Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-06T05:59:05Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 898.79 +/- 55.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
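A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `package_to_hub` naming):
```python
# Sketch only: the filename and evaluation setup are assumptions, not confirmed by this repo.
import gym
import pybullet_envs  # noqa: F401  registers AntBulletEnv-v0

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="alkiskoudounas/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```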
|
trendfollower/distilbert-base-uncased-finetuned-emotion | trendfollower | 2023-04-06T06:00:09Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-06T02:32:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.93
- name: F1
type: f1
value: 0.9300768549546928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1662
- Accuracy: 0.93
- F1: 0.9301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.2997 | 0.91 | 0.9095 |
| No log | 2.0 | 126 | 0.2031 | 0.924 | 0.9242 |
| No log | 3.0 | 189 | 0.1826 | 0.9275 | 0.9278 |
| 0.264 | 4.0 | 252 | 0.1668 | 0.93 | 0.9301 |
| 0.264 | 5.0 | 315 | 0.1662 | 0.93 | 0.9301 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1
- Datasets 2.11.0
- Tokenizers 0.13.3
|
jkhan447/HateXplain-All-agreed-labeled | jkhan447 | 2023-04-06T05:59:00Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-18T07:24:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: HateXplain-All-agreed-labeled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HateXplain-All-agreed-labeled
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8955
- Accuracy: 0.8161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
davidliu1110/my_awesome_wnut_model | davidliu1110 | 2023-04-06T05:54:25Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-04-06T05:49:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5142348754448398
- name: Recall
type: recall
value: 0.267840593141798
- name: F1
type: f1
value: 0.352224253503961
- name: Accuracy
type: accuracy
value: 0.9396776537984695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2823
- Precision: 0.5142
- Recall: 0.2678
- F1: 0.3522
- Accuracy: 0.9397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2951 | 0.3719 | 0.1965 | 0.2571 | 0.9357 |
| No log | 2.0 | 426 | 0.2823 | 0.5142 | 0.2678 | 0.3522 | 0.9397 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
makarios19/my_awesome_billsum_model | makarios19 | 2023-04-06T05:42:56Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-04-06T05:36:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1368
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4108
- Rouge1: 0.1368
- Rouge2: 0.0444
- Rougel: 0.1141
- Rougelsum: 0.1142
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7110 | 0.1233 | 0.0312 | 0.1039 | 0.1038 | 19.0 |
| No log | 2.0 | 124 | 2.4936 | 0.1343 | 0.0452 | 0.114 | 0.114 | 19.0 |
| No log | 3.0 | 186 | 2.4293 | 0.1364 | 0.0452 | 0.1134 | 0.1133 | 19.0 |
| No log | 4.0 | 248 | 2.4108 | 0.1368 | 0.0444 | 0.1141 | 0.1142 | 19.0 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
nlp-tlp/mwo-ner-5 | nlp-tlp | 2023-04-06T05:24:10Z | 2 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:mwo_ner",
"region:us"
] | token-classification | 2023-04-06T05:16:51Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- mwo_ner
widget:
- text: "replace seal on pump"
---
## MWO NER Test
A flair-based NER model for MWOs (maintenance work orders). There are five classes: `Item`, `Activity`, `Observation`, `Specifier`, and `Consumable`.
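A minimal tagging sketch with the Flair API, reusing the widget sentence from the metadata (it is an assumption that the repo ships a standard `SequenceTagger` checkpoint):
```python
# Sketch: assumes a standard Flair SequenceTagger checkpoint is hosted in this repo.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("nlp-tlp/mwo-ner-5")

sentence = Sentence("replace seal on pump")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```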
|
dhnchandan/huggingface | dhnchandan | 2023-04-06T05:13:36Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"chemistry",
"art",
"code",
"text-classification",
"en",
"bn",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-04-06T05:06:52Z | ---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
- bn
metrics:
- accuracy
- character
- bleu
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- chemistry
- art
- code
--- |
chckpnt-mrrng/UberRealisticP0rnMerge | chckpnt-mrrng | 2023-04-06T04:55:37Z | 0 | 1 | null | [
"license:openrail",
"region:us"
] | null | 2023-04-06T04:19:42Z | ---
license: openrail
---
JUST MIRRORING FOR COLAB FROM: https://civitai.com/models/2661/uber-realistic-porn-merge-urpm |
lmahecha/data1 | lmahecha | 2023-04-06T04:42:37Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-04-06T04:42:37Z | ---
license: bigscience-openrail-m
---
|
proleetops/LunarLander-v2 | proleetops | 2023-04-06T04:41:44Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-06T04:41:39Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -228.10 +/- 122.55
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'repo_id': 'proleetops/LunarLander-v2'
'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'batch_size': 512
'minibatch_size': 128}
```
|
SaathvikD/dqn-PongNoFrameskip-v4 | SaathvikD | 2023-04-06T04:31:45Z | 0 | 1 | stable-baselines3 | [
"stable-baselines3",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-06T04:30:46Z | ---
library_name: stable-baselines3
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
metrics:
- type: mean_reward
value: 20.40 +/- 0.80
name: mean_reward
verified: false
---
# **DQN** Agent playing **PongNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **PongNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env PongNoFrameskip-v4 -orga SaathvikD -f logs/
python -m rl_zoo3.enjoy --algo dqn --env PongNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env PongNoFrameskip-v4 -orga SaathvikD -f logs/
python -m rl_zoo3.enjoy --algo dqn --env PongNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env PongNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env PongNoFrameskip-v4 -f logs/ -orga SaathvikD
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.3),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
YuxinJiang/debate-alpaca-lora_7b_001 | YuxinJiang | 2023-04-06T04:16:44Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-04-03T19:45:01Z | ---
license: apache-2.0
---
# Debate-alpaca-lora
An English debate model built by instruction-tuning LLaMA on [Kialo](https://www.kialo.com/) data.
We may ask the model to **support** or **oppose** a claim, listing arguments in descending order of impact.
**A quick start for inference**: <a href="https://colab.research.google.com/drive/1emuN4LcO4X2-BCoATbhawIZeGBr-d3Y-?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Some results are shown below:
```
Instruction: Support the following claim in descending order of impact.
Input: The impact of AI on the education will be positive.
Response: 1. AI will be able to help teachers with administrative tasks such as grading papers and keeping track of student progress.
2. AI can help teachers to keep track of students' progress and to make sure that they have understood the lesson.
3. AI will be able to help teachers to customize the lesson to their students' needs.
```
```
Instruction: Oppose the following claim in descending order of impact.
Input: The impact of AI on the education will be positive.
Response: 1. AI will replace teachers and thus reduce the number of jobs in this sector.
```
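For local inference outside Colab, a rough sketch along the lines of the Alpaca-LoRA generation code (the prompt template, base model loading details, and generation settings are assumptions):
```python
# Sketch only: assumes the Alpaca-style prompt template used by alpaca-lora's finetune.py.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "decapoda-research/llama-7b-hf"  # same base as in the training command below
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "YuxinJiang/debate-alpaca-lora_7b_001")

prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSupport the following claim in descending order of impact.\n\n"
    "### Input:\nThe impact of AI on the education will be positive.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```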
## Well-trained Model
| Model Name | Training Data and Setting |
| --- | :--- |
| [debate-alpaca-lora_7b_001](https://huggingface.co/YuxinJiang/debate-alpaca-lora_7b_001) | Trained on 21k kialo data, 10 epoch |
| [debate-alpaca-lora_7b_002](https://huggingface.co/YuxinJiang/debate-alpaca-lora_7b_002) | Trained on 52k alpaca data + 21k kialo data, 10 epoch |
## Data Collection
We collected 1,560 discussion threads from Kialo posted before January 2020. Each discussion forms an argument tree, as the following figure shows. Except for the thesis, every claim in the argument tree either opposes or supports its parent claim. Moreover, each claim has impact votes assigned by the users of the platform, which evaluate how impactful the claim is.

We extract each argument together with its supporting children claims or opposing children claims to form a training example. The children claims are ranked in descending order of impact. We delete children claims with fewer than 5 votes or an average impact score below 2. Finally, we obtain 20,998 [training examples](https://github.com/YJiangcm/Debate-alpaca-lora/blob/master/kialo_debate.json) formulated as follows:
```
{
"instruction": "Support the following claim in descending order of impact.",
"input": "Pineapple belongs on pizza.",
"output": "1. From a culinary point of view, it's always a good thing to have another recipe to make.\n2. Even if pizza does not remain authentically Italian, that is fine, as many foods have been reinvented in foreign cultures and remained delicious e.g. sushi.\n3. Pineapple is a cost effective topping, adding sweetness to the recipe that few other ingredients could at that price point.\n4. Pineapples make pizza aesthetically pleasing.\n5. It's no coincidence that pineapple and pizza both start with pi.\n6. The taste is unusual, which is a good way of changing a routine.\n7. Pineapple enhances the pizza-indulging experience, making pizza bring a greater joy to and satisfaction from eating than ever before.\n8. Sweet pineapple and salty Canadian bacon or ham brings an [umami](https://www.realsimple.com/food-recipes/recipe-collections-favorites/what-umami) that is acceptable the world over.\n9. The pineapple cleanses the palate from having an overly fatty and savory taste.\n10. Assuming meat is also involved, the idea of combining meat and fruit is in [no way new](http://www.dartagnan.com/meat-and-fruit-recipes-and-combinations.html).\n11. If [pizza wurstel](http://pizzawuerstel.de/uber/) is ok, pineapple on pizza must be ok, too.\n12. It is a way for students to have a main course and desert in one meal, enabling more time for exstensive learning.\n13. Pineapple adds a texture and mouthfeel \\(in between crisp and juicy\\) that is missing from most pizzas. Textural variety makes for a more pleasing meal"
},
{
"instruction": "Oppose the following claim in descending order of impact.",
"input": "Pineapple belongs on pizza.",
"output": "1. Pineapple on pizza is an insult to the Italian origins of pizza.\n2. While pineapple is not an ideal pizza topping, it may be helped by being applied directly to the pizza surface by a qualified professional with the appropriate certifications in moisture removal procedures.\n3. You cannot eat Spongebob's house and be considered a good person at the same time.\n4. The sweetness of the pineapple is a bad fit for the general taste of the pizza.\n5. It's wrong in class terms. Pizza is proletarian food: good, cheap, and filling. Pineapples were a complete luxury when they first came to Europe, being displayed for [social status](https://www.foodbeast.com/news/til-people-used-to-rent-pineapples/) instead of eaten. This caused a divide, as the poor were deprived from eating it on pizza, while knowing the wealthy frivolously wasted it on decor. It's a class war on a plate, and that's exactly what it tastes like.\n6. Pineapple agriculture is [heavily polluting](http://www.ticotimes.net/2011/05/26/costa-rica-s-pineapple-boom-unhealthy-warn-experts), It destroys the lives of people in the tropics. Pizza is a large part of the demand for these pineapples.\n7. Torture is wrong. In today's day and age, we should have moved well beyond this kind of barbarism. It's cruel to a tropical fruit to be stuck on top of a pizza and be shoved into an oven.\n8. According to the [Oxford dictionary](https://en.oxforddictionaries.com/definition/pizza), pizza is \"a dish of Italian origin, consisting of a flat round base of dough baked with a topping of tomatoes and cheese, typically with added meat, fish, or vegetables\". Pineapple is a fruit.\n9. Eating pizza first and pineapple as dessert would make the whole meal experience better than together.\n10. Many people have spoken out publicly against pineapple pizza.\n11. Pineapple agriculture is bad for the environment.\n12. [Hawaiian pizza](https://en.wikipedia.org/wiki/Hawaiian_pizza) is a Canadian invention.\n13. Because of the incredible passion people have against putting pineapples on pizza, we ought not to combine the two, thus ending existing conflict and reducing the chance of future conflict, altogether leading towards world peace"
},
```
## Training
We train our model based on [Alpaca LoRA](https://github.com/tloen/alpaca-lora). Training takes about 5 hours on 2 RTX 3090Ti GPUs.
```
WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py \
--base_model='decapoda-research/llama-7b-hf' \
--resume_from_checkpoint 'alpaca-lora-7b' \
--num_epochs=10 \
--cutoff_len=256 \
--group_by_length \
--data_path 'kialo_debate.json' \
--output_dir './debate-alpaca-lora_7b_001' \
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
--micro_batch_size=16
```
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{debate-alpaca-lora,
author={Yuxin Jiang},
title = {An Instruction-following English debate model, LoRA tuning on LLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/YJiangcm/Debate-alpaca-lora}},
}
```
|
blanchefort/rubert-base-cased-sentiment-rurewiews | blanchefort | 2023-04-06T04:06:52Z | 248 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"text-classification",
"sentiment",
"ru",
"dataset:RuReviews",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuReviews
---
# RuBERT for Sentiment Analysis of Product Reviews
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuReviews](https://github.com/sismetanin/rureviews).
## Labels
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-rurewiews')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-rurewiews', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
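Mapping the returned class ids back to the labels above, for example (the sample review and its predicted label are illustrative only):
```python
labels = ["NEUTRAL", "POSITIVE", "NEGATIVE"]
preds = predict("Отличный товар, всем рекомендую!")  # "Great product, recommend to everyone" (illustrative)
print([labels[i] for i in preds])  # e.g. ['POSITIVE']
```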
## Dataset used for model training
**[RuReviews](https://github.com/sismetanin/rureviews)**
> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.
|
blanchefort/rubert-base-cased-sentiment-rusentiment | blanchefort | 2023-04-06T04:06:16Z | 1,931 | 12 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"text-classification",
"sentiment",
"ru",
"dataset:RuSentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuSentiment
---
# RuBERT for Sentiment Analysis
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/).
## Labels
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-rusentiment')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-rusentiment', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
## Dataset used for model training
**[RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/)**
> A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018. |
li1999/clip | li1999 | 2023-04-06T03:45:17Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-04-06T03:45:17Z | ---
license: bigscience-openrail-m
---
|
ricardotalavera/platzi-distilroberta-base-mrpc-glue-ricardo-talavera | ricardotalavera | 2023-04-06T03:44:46Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-06T03:15:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-ricardo-talavera
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8627450980392157
- name: F1
type: f1
value: 0.9
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-ricardo-talavera
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6639
- Accuracy: 0.8627
- F1: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.19 | 1.09 | 500 | 0.6639 | 0.8627 | 0.9 |
| 0.1962 | 2.18 | 1000 | 0.6639 | 0.8627 | 0.9 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
GreeneryScenery/SheepsControlV1 | GreeneryScenery | 2023-04-06T03:40:35Z | 4 | 0 | diffusers | [
"diffusers",
"ControlNet",
"art",
"image-to-image",
"dataset:GreeneryScenery/SheepsNet",
"region:us"
] | image-to-image | 2023-04-05T14:09:24Z | ---
datasets:
- GreeneryScenery/SheepsNet
pipeline_tag: image-to-image
tags:
- ControlNet
- art
---
# V1
First try at training a custom [ControlNet](https://github.com/huggingface/diffusers/tree/main/examples/controlnet) (only 1 epoch 🤗), using the dataset from [here](https://huggingface.co/datasets/GreeneryScenery/SheepsNet).
Follow [this](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet) to use (u sure ya wanna use?).
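A rough usage sketch along the lines of that guide (it is an assumption that this repo loads directly as a `ControlNetModel`; the base checkpoint, scheduler, and conditioning image path are illustrative choices):
```python
# Sketch only: base model, scheduler, and conditioning image path are assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("GreeneryScenery/SheepsControlV1", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

conditioning = load_image("example_input.png")  # sketch-style conditioning image (placeholder path)
image = pipe("Lamb", image=conditioning, num_inference_steps=30).images[0]
image.save("sheep.png")
```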
Things to improve:
- More variety of data in general? (Not only sheeps)
- More data (More sheeps)
- More epochs
- Better text prompts
## Example:
Prompt: Lamb
Conditioning image:
<img src = 'https://huggingface.co/GreeneryScenery/SheepsControl/resolve/main/example_input.png' style = 'width: 256px'>
Image:
<img src = 'https://huggingface.co/GreeneryScenery/SheepsControl/resolve/main/example_output.png' style = 'width: 256px'> |
lorenzoncina/whisper-medium-zh | lorenzoncina | 2023-04-06T03:30:04Z | 38 | 8 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-04-05T07:44:01Z | ---
language:
- zh
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Chinese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Chinese
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 zh-CN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3226
- Cer: 10.9782
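The checkpoint can be tried with the `transformers` ASR pipeline, e.g. (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lorenzoncina/whisper-medium-zh")
print(asr("sample_zh.wav")["text"])  # placeholder audio file
```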
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.3998 | 0.1 | 1000 | 0.2898 | 19.1261 |
| 0.2414 | 1.07 | 2000 | 0.2826 | 12.7761 |
| 0.1197 | 2.04 | 3000 | 0.2952 | 12.4320 |
| 0.2034 | 3.0 | 4000 | 0.2962 | 13.1970 |
| 0.0344 | 3.1 | 5000 | 0.3039 | 11.5122 |
| 0.0226 | 4.07 | 6000 | 0.3083 | 11.3549 |
| 0.0097 | 5.04 | 7000 | 0.3187 | 11.4440 |
| 0.0121 | 6.01 | 8000 | 0.3173 | 11.2258 |
| 0.0015 | 6.11 | 9000 | 0.3219 | 11.1410 |
| 0.0019 | 7.07 | 10000 | 0.3226 | 10.9782 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.1.dev0
- Tokenizers 0.13.2
|
crumb/GeoV-Instruct-LoRA | crumb | 2023-04-06T02:47:23Z | 0 | 1 | null | [
"geov",
"en",
"region:us"
] | null | 2023-04-06T02:30:09Z | ---
language:
- en
tags:
- geov
---
Prompt Format:
```
[instruction]
[optional input]
[response will start after two newlines]
```
```python
!pip install -q bitsandbytes datasets accelerate loralib
!pip install -q git+https://github.com/huggingface/transformers.git@main git+https://github.com/huggingface/peft.git
!pip install -q geov
import torch
from peft import PeftModel, PeftConfig
from geov import GeoVForCausalLM, GeoVTokenizer
model = GeoVForCausalLM.from_pretrained(
"GeoV/GeoV-9b",
load_in_8bit=True,
low_cpu_mem_usage=True,
device_map='auto',
)
tokenizer = GeoVTokenizer.from_pretrained("GeoV/GeoV-9b")
peft_model_id = "crumb/GeoV-Instruct-LoRA"
model = PeftModel.from_pretrained(model, peft_model_id)
# Inference
prompt = '''
Describe the structure of an atom.
'''
batch = tokenizer(prompt, return_tensors='pt')
with torch.cuda.amp.autocast():
output_tokens = model.generate(**batch, max_new_tokens=50)
print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))
``` |
Fred99774/descoledra | Fred99774 | 2023-04-06T02:41:39Z | 30 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-06T02:35:17Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### descoledra Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
je1lee/my_awesome_food_model | je1lee | 2023-04-06T02:16:13Z | 218 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-04-05T09:25:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Latanzaa/medusa_queen | Latanzaa | 2023-04-06T01:59:18Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-06T01:59:18Z | ---
license: creativeml-openrail-m
---
|
fangjiaqi/SDV1.5-pruned-emaonly-offical | fangjiaqi | 2023-04-06T01:55:10Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-04T11:22:10Z | ---
license: creativeml-openrail-m
---
|
gsvr30/distilbert-base-uncased-finetuned-cola | gsvr30 | 2023-04-06T01:42:22Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-06T01:33:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5274949902750498
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8492
- Matthews Correlation: 0.5275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5255 | 1.0 | 535 | 0.5222 | 0.4356 |
| 0.3437 | 2.0 | 1070 | 0.5142 | 0.4906 |
| 0.2331 | 3.0 | 1605 | 0.5600 | 0.5052 |
| 0.174 | 4.0 | 2140 | 0.7818 | 0.5059 |
| 0.1332 | 5.0 | 2675 | 0.8492 | 0.5275 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
HaiderAUT/Reinforce-cartpole | HaiderAUT | 2023-04-06T01:24:31Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-06T01:24:28Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -5.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jayeshvpatil/rl_course_vizdoom_health_gathering_supreme | jayeshvpatil | 2023-04-06T01:16:16Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-06T01:16:08Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.61 +/- 4.51
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r jayeshvpatil/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
superustc/ppo-LunarLander-v2 | superustc | 2023-04-06T01:04:11Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-06T01:03:45Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.11 +/- 19.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ArisuNguyen/retrained_bart_vn | ArisuNguyen | 2023-04-06T00:58:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-04-05T17:09:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: retrained_bart_vn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrained_bart_vn
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
FalconRR/es_pipeline | FalconRR | 2023-04-06T00:36:05Z | 4 | 1 | spacy | [
"spacy",
"text-classification",
"es",
"region:us"
] | text-classification | 2023-04-05T23:51:32Z | ---
tags:
- spacy
- text-classification
language:
- es
model-index:
- name: es_pipeline
results: []
widget:
- text: "¿Qué es lo que te pasa pues a vos, parce?"
metrics:
- accuracy
pipeline_tag: text-classification
---
| Feature | Description |
| --- | --- |
| **Name** | `es_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.1,<3.6.0` |
| **Default Pipeline** | `transformer`, `textcat` |
| **Components** | `transformer`, `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [Falcon Restrepo Ramos]() |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat`** | `Col`, `Arg` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 83.75 |
| `CATS_MICRO_P` | 83.89 |
| `CATS_MICRO_R` | 83.89 |
| `CATS_MICRO_F` | 83.89 |
| `CATS_MACRO_P` | 83.67 |
| `CATS_MACRO_R` | 83.88 |
| `CATS_MACRO_F` | 83.75 |
| `CATS_MACRO_AUC` | 90.09 |
| `TRANSFORMER_LOSS` | 4738.91 |
| `TEXTCAT_LOSS` | 274.59 | |
decept1on/zephyrs-diffusion-v1 | decept1on | 2023-04-05T23:58:30Z | 33 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-05T23:45:04Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### zephyrs-diffusion-v1 Dreambooth model trained by decept1on with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
.jpg)
Changelog:
+ Added 27 instance images of anime girls for further regularization; possible quality improvements, not verified. |
arashiyama/corneos7thHeavenMix_v2 | arashiyama | 2023-04-05T23:35:02Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-05T23:13:17Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/4669/corneos-7th-heaven-mix |
huggingtweets/horalvl_ | huggingtweets | 2023-04-05T22:50:17Z | 138 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-05T22:50:09Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1638561784209584128/m4X40sF0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">💔</div>
<div style="text-align: center; font-size: 14px;">@horalvl_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 💔.
| Data | 💔 |
| --- | --- |
| Tweets downloaded | 2595 |
| Retweets | 41 |
| Short tweets | 1010 |
| Tweets kept | 1544 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ssyycrj8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @horalvl_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10u91ozq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10u91ozq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/horalvl_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Zlovoblachko/sentiment_parser | Zlovoblachko | 2023-04-05T22:40:28Z | 1 | 3 | spacy | [
"spacy",
"en",
"region:us"
] | null | 2023-03-27T02:26:50Z | ---
tags:
- spacy
language:
- en
model-index:
- name: en_pipeline
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.4,<3.5.0` |
| **Default Pipeline** | `transformer`, `spancat` |
| **Components** | `transformer`, `spancat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`spancat`** | `Collocation calque` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `SPANS_SC_F` | 78.65 |
| `SPANS_SC_P` | 79.55 |
| `SPANS_SC_R` | 77.78 |
| `TRANSFORMER_LOSS` | 7535.29 |
| `SPANCAT_LOSS` | 148493.75 | |
danilyef/a2c-PandaReachDense-v2 | danilyef | 2023-04-05T22:08:26Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T21:03:49Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.44 +/- 1.05
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
shikunl/prismer | shikunl | 2023-04-05T22:05:09Z | 0 | 9 | null | [
"image-to-text",
"en",
"license:cc-by-sa-4.0",
"region:us"
] | image-to-text | 2023-02-01T16:00:15Z | ---
license: cc-by-sa-4.0
language:
- en
pipeline_tag: image-to-text
--- |
bryantaekim/bk_text_to_ad | bryantaekim | 2023-04-05T21:57:11Z | 0 | 0 | null | [
"insurance",
"marketing",
"text2text-generation",
"en",
"dataset:bryantaekim/bk_gen_ai",
"license:bigscience-openrail-m",
"region:us"
] | text2text-generation | 2023-04-05T21:25:10Z | ---
license: bigscience-openrail-m
datasets:
- bryantaekim/bk_gen_ai
language:
- en
metrics:
- accuracy
pipeline_tag: text2text-generation
tags:
- insurance
- marketing
--- |
jeffjamesqz/adgenerator | jeffjamesqz | 2023-04-05T21:27:56Z | 0 | 0 | transformers | [
"transformers",
"text-generation",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-05T21:08:50Z | ---
license: openrail
library_name: transformers
pipeline_tag: text-generation
--- |
mjbeattie/gcicontracts | mjbeattie | 2023-04-05T21:27:55Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2023-03-28T21:31:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: gcicontracts
results: []
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gcicontracts
This model is a fine-tuned version of [mjbeattie/mjbbillsum](https://huggingface.co/mjbeattie/mjbbillsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0721
- Rouge1: 0.2917
- Rouge2: 0.1209
- Rougel: 0.2556
- Rougelsum: 0.2535
- Gen Len: 18.1463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 11 | 2.4545 | 0.3004 | 0.1333 | 0.2658 | 0.2637 | 18.2927 |
| No log | 2.0 | 22 | 2.3030 | 0.3047 | 0.1397 | 0.2744 | 0.2709 | 18.2927 |
| No log | 3.0 | 33 | 2.2187 | 0.3065 | 0.1416 | 0.276 | 0.2718 | 18.2439 |
| No log | 4.0 | 44 | 2.1562 | 0.2926 | 0.1209 | 0.2558 | 0.2538 | 18.2439 |
| No log | 5.0 | 55 | 2.1172 | 0.2926 | 0.1209 | 0.2558 | 0.2538 | 18.2439 |
| No log | 6.0 | 66 | 2.0921 | 0.2914 | 0.1209 | 0.2552 | 0.253 | 18.1463 |
| No log | 7.0 | 77 | 2.0786 | 0.2917 | 0.1209 | 0.2556 | 0.2535 | 18.1463 |
| No log | 8.0 | 88 | 2.0721 | 0.2917 | 0.1209 | 0.2556 | 0.2535 | 18.1463 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.0
- Tokenizers 0.11.0 |
Tingli/bert-base-banking77-pt2 | Tingli | 2023-04-05T21:26:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-05T20:28:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9292103144277876
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2982
- F1: 0.9292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0831 | 1.0 | 626 | 0.8018 | 0.8336 |
| 0.381 | 2.0 | 1252 | 0.3600 | 0.9206 |
| 0.1832 | 3.0 | 1878 | 0.2982 | 0.9292 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
jmurphy97/ppo-LunarLander-v2 | jmurphy97 | 2023-04-05T21:24:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T21:24:33Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.00 +/- 29.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved checkpoint from the Hub and restore the PPO policy
checkpoint = load_from_hub(repo_id="jmurphy97/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
petebrooks/GenerAd-AI | petebrooks | 2023-04-05T21:12:27Z | 6 | 0 | adapter-transformers | [
"adapter-transformers",
"art",
"text-generation",
"en",
"dataset:FourthBrainGenAI/Product-Descriptions-and-Ads",
"license:openrail",
"region:us"
] | text-generation | 2023-04-05T21:00:26Z | ---
license: openrail
datasets:
- FourthBrainGenAI/Product-Descriptions-and-Ads
language:
- en
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- art
--- |
data-corentinv/bloom-fourthbrain-hackathon-v2-1b7-lora-ads | data-corentinv | 2023-04-05T21:11:04Z | 0 | 0 | transformers | [
"transformers",
"text2text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-04-05T21:05:52Z | ---
license: mit
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text2text-generation
--- |
johngiorgi/led-base-16384 | johngiorgi | 2023-04-05T20:56:38Z | 96 | 0 | transformers | [
"transformers",
"pytorch",
"led",
"text2text-generation",
"summarization",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-04-05T20:48:38Z | ---
language: en
license: apache-2.0
pipeline_tag: summarization
---
# Model Card
This model is identical to [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384), except the `generation_config.json` has been updated from:
```json
{
"_from_model_config": true,
"bos_token_id": 0,
"decoder_start_token_id": 2,
"eos_token_id": 2,
"pad_token_id": 1
}
```
to
```json
{
"bos_token_id": 0,
"decoder_start_token_id": 2,
"eos_token_id": 2,
"pad_token_id": 1,
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 512,
"min_length": 100,
"no_repeat_ngram_size": 3,
"num_beams": 4
}
```
which we found to be much more stable when fine-tuning the model for summarization tasks.
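As a minimal usage sketch (the input document below is a placeholder), `generate()` picks these defaults up automatically:
```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("johngiorgi/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("johngiorgi/led-base-16384")

document = "Replace me with a long document to summarize."
inputs = tokenizer(document, return_tensors="pt", truncation=True)

# Global attention on the first token is the usual choice for LED summarization.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

# Beam search, length penalty, min/max length, etc. come from this repo's
# generation_config.json shown above, so no extra generate() arguments are needed.
summary_ids = model.generate(**inputs, global_attention_mask=global_attention_mask)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
|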
RomanTeucher/PythonCoder | RomanTeucher | 2023-04-05T20:50:12Z | 5 | 2 | adapter-transformers | [
"adapter-transformers",
"code",
"python",
"text-generation",
"en",
"dataset:RomanTeucher/awesome_topic_code_snippets",
"license:openrail",
"region:us"
] | text-generation | 2023-04-05T19:57:41Z | ---
license: openrail
datasets:
- RomanTeucher/awesome_topic_code_snippets
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- code
- python
--- |
Zekunli/flan-t5-base-da-multiwoz2.0_400-loss-ep100 | Zekunli | 2023-04-05T20:45:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-04-05T16:31:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: flan-t5-base-da-multiwoz2.0_400-loss-ep100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-da-multiwoz2.0_400-loss-ep100
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3741
- Accuracy: 39.1797
- Num: 7358
- Gen Len: 15.6147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 80
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:|
| 1.1208 | 2.33 | 400 | 0.5132 | 26.0596 | 7358 | 14.302 |
| 0.553 | 4.65 | 800 | 0.4287 | 33.6512 | 7358 | 15.3968 |
| 0.4783 | 6.98 | 1200 | 0.4007 | 35.3232 | 7358 | 15.8898 |
| 0.4379 | 9.3 | 1600 | 0.3908 | 36.7949 | 7358 | 15.5749 |
| 0.4097 | 11.63 | 2000 | 0.3851 | 36.8451 | 7358 | 16.4447 |
| 0.3859 | 13.95 | 2400 | 0.3770 | 37.9797 | 7358 | 16.2493 |
| 0.3675 | 16.28 | 2800 | 0.3741 | 39.2162 | 7358 | 16.0883 |
| 0.3519 | 18.6 | 3200 | 0.3741 | 39.1797 | 7358 | 15.6147 |
| 0.34 | 20.93 | 3600 | 0.3757 | 40.1516 | 7358 | 15.8101 |
| 0.3277 | 23.26 | 4000 | 0.3774 | 40.2096 | 7358 | 15.8341 |
| 0.3181 | 25.58 | 4400 | 0.3755 | 40.3496 | 7358 | 15.4981 |
| 0.3063 | 27.91 | 4800 | 0.3782 | 40.6828 | 7358 | 15.5501 |
| 0.2934 | 30.23 | 5200 | 0.3831 | 40.8427 | 7358 | 15.8903 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
yilin1344/GenerOasisLyricsOne-AI | yilin1344 | 2023-04-05T20:44:17Z | 0 | 0 | transformers | [
"transformers",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2023-04-05T19:36:42Z | ---
license: openrail
language:
- en
library_name: transformers
--- |
MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c | MoritzLaurer | 2023-04-05T20:40:03Z | 3,157 | 10 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"en",
"arxiv:2104.07179",
"arxiv:2106.09449",
"arxiv:2006.03654",
"arxiv:2111.09543",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- en
license: mit
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I first thought that I liked the movie, but upon second thought it was actually disappointing. [SEP] The movie was good."
---
# DeBERTa-v3-base-mnli-fever-docnli-ling-2c
## Model description
This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to enable the inclusion of the DocNLI dataset.
The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf) as well as the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).
For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c")
sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
### Training procedure
DeBERTa-v3-base-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c
---------|----------|---------|----------|----------|------
0.935 | 0.933 | 0.897 | 0.710 | 0.678 | 0.895
## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues. |
bayartsogt/roberta-base-ner-demo | bayartsogt | 2023-04-05T20:37:24Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"mn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-01T03:49:12Z | ---
language:
- mn
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-ner-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner-demo
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0833
- Precision: 0.8885
- Recall: 0.9070
- F1: 0.8976
- Accuracy: 0.9752
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1666 | 1.0 | 477 | 0.0833 | 0.8885 | 0.9070 | 0.8976 | 0.9752 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
martomor/oasis-bloom | martomor | 2023-04-05T20:27:06Z | 0 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"dataset:tthoraldson/OasisLyrics",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-05T19:32:02Z | ---
license: openrail
datasets:
- tthoraldson/OasisLyrics
language:
- en
library_name: transformers
pipeline_tag: text-generation
--- |
tthoraldson/oasis-bloom | tthoraldson | 2023-04-05T20:26:02Z | 0 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"dataset:tthoraldson/OasisLyrics",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-05T20:01:43Z | ---
license: openrail
datasets:
- tthoraldson/OasisLyrics
language:
- en
widget:
- text: Oasis Song Name
pipeline_tag: text-generation
library_name: transformers
--- |
cyclonetrue/Taxi-v3-model | cyclonetrue | 2023-04-05T20:25:26Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T20:25:23Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-model
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the Deep RL Course notebook helper that downloads and unpickles the saved Q-table.
model = load_from_hub(repo_id="cyclonetrue/Taxi-v3-model", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
worknick/opt-125m-tldr | worknick | 2023-04-05T20:22:40Z | 195 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-03T07:26:53Z | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt-125m-tldr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-125m-tldr
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7992 | 0.07 | 1000 | 2.7158 |
| 2.7437 | 0.14 | 2000 | 2.6938 |
| 2.732 | 0.21 | 3000 | 2.6797 |
| 2.7157 | 0.27 | 4000 | 2.6691 |
| 2.7071 | 0.34 | 5000 | 2.6620 |
| 2.6998 | 0.41 | 6000 | 2.6557 |
| 2.696 | 0.48 | 7000 | 2.6495 |
| 2.6902 | 0.55 | 8000 | 2.6451 |
| 2.6791 | 0.62 | 9000 | 2.6408 |
| 2.6823 | 0.69 | 10000 | 2.6379 |
| 2.6806 | 0.75 | 11000 | 2.6345 |
| 2.6746 | 0.82 | 12000 | 2.6330 |
| 2.6765 | 0.89 | 13000 | 2.6306 |
| 2.6738 | 0.96 | 14000 | 2.6296 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
|
SvenL1975/ppo-LunarLander-v2 | SvenL1975 | 2023-04-05T20:14:47Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T20:14:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 88.13 +/- 109.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved checkpoint from the Hub and restore the PPO policy
checkpoint = load_from_hub(repo_id="SvenL1975/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kanak8278/electra-base-ner-food-recipe | kanak8278 | 2023-04-05T20:04:42Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-04-05T18:54:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electra-base-ner-food-recipe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-ner-food-recipe
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1889
- Precision: 0.7866
- Recall: 0.8144
- F1: 0.8003
- Accuracy: 0.9558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0216 | 2.66 | 2121 | 0.1672 | 0.7858 | 0.8183 | 0.8017 | 0.9575 |
| 0.0237 | 5.33 | 4242 | 0.1744 | 0.7842 | 0.8122 | 0.7980 | 0.9564 |
| 0.0281 | 7.99 | 6363 | 0.1793 | 0.7812 | 0.8148 | 0.7976 | 0.9558 |
| 0.0236 | 10.66 | 8484 | 0.1863 | 0.7923 | 0.8148 | 0.8034 | 0.9567 |
| 0.0246 | 13.32 | 10605 | 0.1881 | 0.7871 | 0.8170 | 0.8018 | 0.9561 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
SaathvikD/dqn-BreakoutNoFrameskip-v4 | SaathvikD | 2023-04-05T19:52:16Z | 0 | 1 | stable-baselines3 | [
"stable-baselines3",
"BreakoutNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T19:51:18Z | ---
library_name: stable-baselines3
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
metrics:
- type: mean_reward
value: 225.50 +/- 91.46
name: mean_reward
verified: false
---
# **DQN** Agent playing **BreakoutNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BreakoutNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga SaathvikD -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga SaathvikD -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env BreakoutNoFrameskip-v4 -f logs/ -orga SaathvikD
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.2),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
akadhim-ai/sd_martin_valen-model-v1-2_400 | akadhim-ai | 2023-04-05T19:41:49Z | 31 | 0 | diffusers | [
"diffusers",
"tensorboard",
"art",
"text-to-image",
"en",
"dataset:Ali-fb/martin_valen_dataset",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-05T18:47:00Z | ---
license: openrail
datasets:
- Ali-fb/martin_valen_dataset
language:
- en
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
--- |
sandeepvarma99/bert-base-uncased-finetuned-squad | sandeepvarma99 | 2023-04-05T19:40:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-04-05T08:20:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.041 | 1.0 | 7377 | 0.9949 |
| 0.7002 | 2.0 | 14754 | 1.0049 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
verderis/reinforce-heli-01 | verderis | 2023-04-05T19:33:48Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T19:33:23Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-heli-01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 12.30 +/- 11.52
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
osman93/ppo-PyramidsTESTCOLAB | osman93 | 2023-04-05T19:17:03Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-04-05T19:16:58Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: osman93/ppo-PyramidsTESTCOLAB
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ManarAli/a2c-PandaReachDense-v2 | ManarAli | 2023-04-05T19:05:32Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T12:00:07Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.64 +/- 0.38
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the saved checkpoint from the Hub and restore the A2C policy
checkpoint = load_from_hub(repo_id="ManarAli/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
muellerzr/performance-debugging | muellerzr | 2023-04-05T18:58:20Z | 0 | 0 | null | [
"dataset:glue",
"license:apache-2.0",
"region:us"
] | null | 2023-04-05T16:46:36Z | ---
license: apache-2.0
datasets:
- glue
metrics:
- glue
---
# Performance Debugging
Uses `aim` and `accelerate` to check how close single-GPU and multi-GPU training can get to each other in terms of performance, and to show what that comparison actually looks like.
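A minimal, illustrative sketch of the idea (this is not the repo's actual training script; it assumes `accelerate`'s tracking API with the `aim` tracker installed):
```python
# Run with: accelerate launch --num_processes 1 perf_debug.py
# and again with --num_processes <N> to compare the logged curves in aim.
import time
import torch
from accelerate import Accelerator

accelerator = Accelerator(log_with="aim")  # requires `pip install aim`
accelerator.init_trackers("performance-debugging")

model = torch.nn.Linear(512, 512)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)

for step in range(100):
    start = time.time()
    # synthetic batch, just to exercise the forward/backward path
    batch = torch.randn(64, 512, device=accelerator.device)
    loss = model(batch).pow(2).mean()
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
    accelerator.log({"loss": loss.item(), "step_time": time.time() - start}, step=step)

accelerator.end_training()
```
|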
alesthehuman/q-Taxi-v3 | alesthehuman | 2023-04-05T18:42:09Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T18:41:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the Deep RL Course notebook helper that downloads and unpickles the saved Q-table.
model = load_from_hub(repo_id="alesthehuman/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
HASAN55/bert-finetuned-squad-epochs | HASAN55 | 2023-04-05T18:33:43Z | 72 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-04-05T14:10:46Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: HASAN55/bert-finetuned-squad-epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HASAN55/bert-finetuned-squad-epochs
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7780
- Train End Logits Accuracy: 0.7786
- Train Start Logits Accuracy: 0.7388
- Validation Loss: 2.4426
- Validation End Logits Accuracy: 0.4155
- Validation Start Logits Accuracy: 0.4069
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.2707 | 0.6660 | 0.6248 | 2.2867 | 0.4351 | 0.4215 | 0 |
| 0.7780 | 0.7786 | 0.7388 | 2.4426 | 0.4155 | 0.4069 | 1 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
kenzo4433/poca-SoccerTwos | kenzo4433 | 2023-04-05T18:28:27Z | 15 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-04-05T18:10:08Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: kenzo4433/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
allenai/longformer-base-4096 | allenai | 2023-04-05T18:24:00Z | 3,175,159 | 185 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"longformer",
"en",
"arxiv:2004.05150",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
license: apache-2.0
---
# longformer-base-4096
[Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents.
`longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint and pretrained for MLM on long documents. It supports sequences of length up to 4,096.
Longformer uses a combination of a sliding window (local) attention and global attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.
Please refer to the examples in `modeling_longformer.py` and the paper for more details on how to set global attention.
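For example, a minimal sketch that keeps sliding-window attention everywhere but adds global attention on the `<s>` token (a common choice for classification-style tasks):
```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

text = "Replace me with a (very) long document."
inputs = tokenizer(text, return_tensors="pt")

# 0 = local (sliding-window) attention, 1 = global attention
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the <s> token

outputs = model(**inputs, global_attention_mask=global_attention_mask)
last_hidden_state = outputs.last_hidden_state
```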
### Citing
If you use `Longformer` in your research, please cite [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150).
```
@article{Beltagy2020Longformer,
title={Longformer: The Long-Document Transformer},
author={Iz Beltagy and Matthew E. Peters and Arman Cohan},
journal={arXiv:2004.05150},
year={2020},
}
```
`Longformer` is an open-source project developed by [the Allen Institute for Artificial Intelligence (AI2)](http://www.allenai.org).
AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering. |
jamiehudson/625-model-brand-rem-jh2 | jamiehudson | 2023-04-05T18:15:05Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-04-05T18:14:49Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# 625-model-brand-rem-jh2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jamiehudson/625-model-brand-rem-jh2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
ucberkeley-dlab/hate-measure-roberta-base | ucberkeley-dlab | 2023-04-05T18:12:37Z | 5 | 0 | tf-keras | [
"tf-keras",
"text-classification",
"hate-speech",
"counterspeech",
"irt",
"arxiv:2009.10277",
"en",
"dataset:ucberkeley-dlab/measuring-hate-speech",
"region:us"
] | text-classification | 2023-04-05T17:46:03Z | ---
language:
- en
tags:
- text-classification
- hate-speech
- counterspeech
- irt
- arxiv:2009.10277
datasets:
- ucberkeley-dlab/measuring-hate-speech
---
# Measuring Hate Speech: RoBERTa-Base
This model predicts a continuous hate speech score as described in Kennedy et al. (2020).
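A minimal loading sketch (the exact input/output signature depends on the saved Keras model, so inspect the summary before scoring text):
```python
from huggingface_hub import from_pretrained_keras

# Downloads and rebuilds the saved Keras model from this repo (requires TensorFlow).
model = from_pretrained_keras("ucberkeley-dlab/hate-measure-roberta-base")
model.summary()
```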
## Citation
```
@article{kennedy2020constructing,
title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application},
author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia},
journal={arXiv preprint arXiv:2009.10277},
year={2020}
}
```
## References
Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). [Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application](https://arxiv.org/abs/2009.10277). arXiv preprint arXiv:2009.10277. |
vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa | vocabtrimmer | 2023-04-05T18:09:09Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question answering",
"fr",
"dataset:lmqg/qg_frquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-04-05T18:03:27Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: fr
datasets:
- lmqg/qg_frquad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu."
example_title: "Question Answering Example 1"
- text: "question: Comment appelle-t-on la Guerre de 14-18 ?, context: Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l'Amirauté après la catastrophe des Dardanelles, lors de la Grande Guerre de 14-18, ou son rejet par l'électorat en juillet 1945. On sait également que dans ces deux cas, la guérison, certes lente et douloureuse et jamais complète ni définitive, se fera grâce à la peinture. D'un autre côté, étant donnés les symptômes de ce mal que Churchill éprouvait de plus en plus, il ne pouvait rien moins qu'être purement associé à de telles causes extrinsèques, ce qui correspond au profil classique de la dépression majeure unipolaire ou bipolaire."
example_title: "Question Answering Example 2"
model-index:
- name: vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_frquad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 31.61
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 41.11
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 32.95
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 93.48
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 79.52
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 66.37
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 45.11
---
# Model Card of `vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa`
This model is fine-tuned version of [vocabtrimmer/mbart-large-cc25-trimmed-fr](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-fr) for question answering task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mbart-large-cc25-trimmed-fr](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-fr)
- **Language:** fr
- **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="fr", model="vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa")
# model prediction
answers = model.answer_q(list_question="En quelle année a-t-on trouvé trace d'un haut fourneau similaire?", list_context=" Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa")
output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 45.11 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| AnswerF1Score | 66.37 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| BERTScore | 93.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_1 | 42.71 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_2 | 37.89 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_3 | 34.5 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_4 | 31.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| METEOR | 32.95 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| MoverScore | 79.52 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| ROUGE_L | 41.11 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_frquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: vocabtrimmer/mbart-large-cc25-trimmed-fr
- max_length: 512
- max_length_output: 32
- epoch: 11
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
alexgrigoras/whisper-small-ro | alexgrigoras | 2023-04-05T18:04:31Z | 73 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ro",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-04-05T14:16:15Z | ---
language:
- ro
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Romanian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Romanian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 Romanian Dataset dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2164
- eval_wer: 94.3222
- eval_runtime: 2993.0092
- eval_samples_per_second: 1.289
- eval_steps_per_second: 0.161
- epoch: 1.8
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
kenzo4433/rl_course_vizdoom_health_gathering_supreme | kenzo4433 | 2023-04-05T17:56:11Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T17:56:02Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.08 +/- 4.66
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kenzo4433/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
poplkl/distilbert-base-uncased-finetuned-imdb | poplkl | 2023-04-05T17:51:58Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-04-05T17:35:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
alkiskoudounas/ppo-PyramidsRND1 | alkiskoudounas | 2023-04-05T17:46:32Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-04-05T17:46:26Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: alkiskoudounas/ppo-PyramidsRND1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
IchtacaKemeRaz/favabean | IchtacaKemeRaz | 2023-04-05T17:37:01Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gptj",
"text-generation",
"text generation",
"conversational",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-04-05T17:37:01Z | ---
license: creativeml-openrail-m
language:
- en
thumbnail: null
tags:
- text generation
- conversational
inference: false
duplicated_from: PygmalionAI/pygmalion-6b
---
# Pygmalion 6B
## Model description
Pygmalion 6B is a proof-of-concept dialogue model based on EleutherAI's [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B).
**Warning:** This model is **NOT** suitable for use by minors. It **will** output X-rated content under certain circumstances.
## Training data
The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real _and_ partially machine-generated conversations.
## Training procedure
Model weights were initialized from the `uft-6b` ConvoGPT model made available in [this commit](https://huggingface.co/hakurei/convogpt/tree/41b67bfddb6cd97070ffddf708e9720c9cb8d224/6b-uft).
The model was then further fine-tuned on ~48.5 million tokens for ~5k steps on 4 NVIDIA A40s using DeepSpeed.
## Intended use
### The easy way
We provide a notebook with a Gradio UI for playing around with the model without having to manually format inputs. This notebook can be found [here](https://github.com/PygmalionAI/gradio-ui/blob/master/notebooks/GPU.ipynb).
### The manual way
The model can be used as a regular text generation model, but it'll perform best if the input prompt adheres to the following format:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [Your input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is chat history so the model can have some conversational context to draw from. Ideally it'll be pairs of messages like:
```
[CHARACTER]: [some dialogue here]
You: [your response to the dialogue above]
```
Apart from chat history, you can also just add example conversations in `[DIALOGUE HISTORY]` to show how the character should speak - ideally at the beginning, so it doesn't get confused as to what's conversation history vs. character definition.
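Putting the format together, a minimal generation sketch (the persona and messages below are made up for illustration, and loading a 6B checkpoint requires a correspondingly large amount of memory):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "IchtacaKemeRaz/favabean"  # duplicated from PygmalionAI/pygmalion-6b
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Persona + <START> delimiter + short dialogue history, ending with the character's turn
prompt = (
    "Aria's Persona: Aria is a cheerful botanist who loves puns.\n"
    "<START>\n"
    "Aria: Hi there! Ready to talk about plants?\n"
    "You: Tell me about your favourite flower.\n"
    "Aria:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
# Print only the newly generated continuation
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```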
## Known issues
We haven't played around with the model enough to enumerate them. Feel free to give us some feedback!
|
inkasaras/rl_course_vizdoom_health_gathering_supreme | inkasaras | 2023-04-05T17:28:49Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T17:28:24Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.86 +/- 6.00
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r inkasaras/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
bsenst/Reinforce-CartPole-v1 | bsenst | 2023-04-05T17:19:01Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-05T14:39:13Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 144.30 +/- 6.33
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|