modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) |
---|---|---|---|---|---|---|---|---|---|
danielv835/PF_Coach
|
danielv835
| 2023-05-09T19:29:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-08T15:21:40Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for PF_Coach
Coach (final) models for the PF Coach project, built for the Tribe LLM Hackathon, May 2023.
The current version is for integration testing.
## Model Details
### Model Description
This model is based on OPT-1.3B, after brief DeepSpeed instruction-following fine-tuning and RLHF on a generic (non-PF) dataset.
- **Developed by:** Daniel Vainsencher
- **Model type:** Causal language modeling, tuned for chat.
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** facebook/OPT-1.3B
### Model Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
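In the meantime, a minimal loading sketch, assuming the standard `transformers` causal-LM workflow for this OPT-based checkpoint (the prompt and generation settings are illustrative, not from the authors):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "danielv835/PF_Coach"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt; the project's actual chat/prompt format is not documented in this card.
inputs = tokenizer("How should I plan my monthly budget?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```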
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ratish/DBERT_CleanDesc_MAKE_v12
|
ratish
| 2023-05-09T19:21:07Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T19:13:38Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ratish/DBERT_CleanDesc_MAKE_v12
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ratish/DBERT_CleanDesc_MAKE_v12
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Train Accuracy: 0.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
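The serialized optimizer above corresponds roughly to the following Keras construction; this is a sketch with the values copied from the config dictionary, not the actual training script:
```python
import tensorflow as tf

# Learning rate decays linearly from 2e-5 to 0 over 4635 steps (PolynomialDecay with power=1).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5, decay_steps=4635, end_learning_rate=0.0, power=1.0, cycle=False
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-8
)
```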
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| nan | nan | 0.0 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Multi-Domain-Expert-Learning/expert-min-pile-instruct
|
Multi-Domain-Expert-Learning
| 2023-05-09T19:08:57Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-06T16:15:06Z |
---
tags:
- generated_from_trainer
datasets:
- pile-instruct/
metrics:
- accuracy
model-index:
- name: layer_4,5,6,7,8
results:
- task:
type: text-generation
name: Causal Language Modeling
dataset:
name: pile-instruct/
type: pile-instruct/
split: None
metrics:
- type: accuracy
value: 0.3842425129408517
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layer_4,5,6,7,8
This model is a fine-tuned version of [P1ayer-1/pythia-deduped-1b-chat-base](https://huggingface.co/P1ayer-1/pythia-deduped-1b-chat-base) on the pile-instruct/ dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9648
- Accuracy: 0.3842
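A minimal inference sketch, assuming the checkpoint works with the standard `transformers` text-generation pipeline (the instruction-style prompt and generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Multi-Domain-Expert-Learning/expert-min-pile-instruct")
# The exact prompt format used during instruction tuning is not documented here.
prompt = "Instruction: Explain what a neural network is.\nResponse:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```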
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 6000
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 7.4574 | 0.1 | 200 | 0.1688 | 7.4961 |
| 7.0445 | 0.2 | 400 | 0.1997 | 7.0547 |
| 6.7483 | 0.3 | 600 | 0.2190 | 6.7930 |
| 6.4568 | 0.4 | 800 | 0.2376 | 6.5703 |
| 6.2865 | 0.5 | 1000 | 0.2552 | 6.375 |
| 6.1028 | 0.6 | 1200 | 0.2793 | 6.1484 |
| 5.8888 | 0.7 | 1400 | 0.2982 | 5.9570 |
| 5.7362 | 0.8 | 1600 | 0.3121 | 5.8008 |
| 5.6507 | 0.9 | 1800 | 0.3238 | 5.6797 |
| 5.565 | 1.0 | 2000 | 0.3318 | 5.5781 |
| 5.4688 | 1.1 | 2200 | 0.3392 | 5.4961 |
| 5.4044 | 1.2 | 2400 | 0.3456 | 5.4219 |
| 5.3323 | 1.3 | 2600 | 0.3516 | 5.3594 |
| 5.2598 | 1.4 | 2800 | 0.3562 | 5.3047 |
| 5.2159 | 1.5 | 3000 | 0.3596 | 5.2578 |
| 5.1992 | 1.6 | 3200 | 0.3638 | 5.2148 |
| 5.1429 | 1.69 | 3400 | 0.3672 | 5.1797 |
| 5.095 | 1.79 | 3600 | 0.3696 | 5.1445 |
| 5.0646 | 1.89 | 3800 | 0.3715 | 5.1172 |
| 5.059 | 1.99 | 4000 | 0.3742 | 5.0859 |
| 5.0152 | 2.09 | 4200 | 0.3756 | 5.0664 |
| 5.0251 | 2.19 | 4400 | 0.3769 | 5.0469 |
| 5.022 | 2.29 | 4600 | 0.3790 | 5.0273 |
| 4.9939 | 2.39 | 4800 | 0.3798 | 5.0156 |
| 4.924 | 2.49 | 5000 | 0.3811 | 5.0 |
| 4.9335 | 2.59 | 5200 | 0.3821 | 4.9883 |
| 4.9231 | 2.69 | 5400 | 0.3829 | 4.9805 |
| 4.8886 | 2.79 | 5600 | 0.3835 | 4.9727 |
| 4.9419 | 2.89 | 5800 | 0.3843 | 4.9648 |
| 4.9227 | 2.99 | 6000 | 0.3842 | 4.9648 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
## Wandb Report
https://wandb.ai/ontocord/pythia-1b-deduped-layer-test-min-pile-instruct/runs/kqlipkt3
|
Vailla-Rohit/bart-base-finetuned-samsum
|
Vailla-Rohit
| 2023-05-09T18:57:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-09T18:56:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: test-dialogue-summarization
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 48.0348
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-dialogue-summarization
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5507
- Rouge1: 48.0348
- Rouge2: 24.8215
- Rougel: 40.5048
- Rougelsum: 44.3467
- Gen Len: 18.1638
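A minimal usage sketch, assuming the checkpoint is used through the standard `transformers` summarization pipeline (the example dialogue is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Vailla-Rohit/bart-base-finetuned-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Tom: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```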
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 460 | 1.5507 | 48.0348 | 24.8215 | 40.5048 | 44.3467 | 18.1638 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lucakuehne/mobilenetv3-imagenet-rev1
|
lucakuehne
| 2023-05-09T18:53:06Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-05-09T18:43:51Z |
---
pipeline_tag: image-classification
---
|
openaccess-ai-collective/llama-13b-alpaca-wizard-vicuna
|
openaccess-ai-collective
| 2023-05-09T18:06:55Z | 11 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:vicgalle/alpaca-gpt4",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-01T19:54:33Z |
---
license: apache-2.0
datasets:
- vicgalle/alpaca-gpt4
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# openaccess-ai-collective/llama-13b-alpaca-wizard
## Trained
- `vicgalle/alpaca-gpt4` 1 epoch, learning rate 3e-5 https://wandb.ai/wing-lian/wizard-vicuna-gpt4/overview
- `deepspeed scripts/finetune.py configs/axolotl/wizard-vicuna-13b-step1.yml --deepspeed configs/ds_config.json --num_epochs 2 --warmup_steps 46 --logging_steps 1 --save_steps 23`
- `wizardlm` https://wandb.ai/wing-lian/wizard-vicuna-gpt4/runs/4y38knw4
- `deepspeed scripts/finetune.py configs/axolotl/wizard-vicuna-13b-step2.yml --deepspeed configs/ds_config-step2.json --num_epochs 2 --logging_steps 1`
- `vicuna` TBD
<pre>Brought to you by the OpenAccess AI Collective</pre>
|
jhutchinson25/ppo-LunarLander-v2
|
jhutchinson25
| 2023-05-09T18:03:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T18:03:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.05 +/- 17.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the files in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the saved agent from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub(repo_id="jhutchinson25/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
harvinder676/bert-news
|
harvinder676
| 2023-05-09T18:02:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-09T17:44:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5512
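A minimal usage sketch, assuming the standard `transformers` fill-mask pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="harvinder676/bert-news")
# The tokenizer is uncased DistilBERT, so the mask token is [MASK].
for prediction in fill_mask("The government announced a new [MASK] policy today."):
    print(prediction["token_str"], round(prediction["score"], 3))
```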
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7548 | 1.0 | 1531 | 2.6146 |
| 2.6217 | 2.0 | 3062 | 2.5512 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
muhammadravi251001/fine-tuned-DatasetQAS-IDK-MRC-with-xlm-roberta-large-with-ITTL-with-freeze-LR-1e-05
|
muhammadravi251001
| 2023-05-09T17:59:05Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-07T13:05:44Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-IDK-MRC-with-xlm-roberta-large-with-ITTL-with-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-IDK-MRC-with-xlm-roberta-large-with-ITTL-with-freeze-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8698
- Exact Match: 74.6073
- F1: 81.6214
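A minimal usage sketch, assuming the standard `transformers` question-answering pipeline (the question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="muhammadravi251001/fine-tuned-DatasetQAS-IDK-MRC-with-xlm-roberta-large-with-ITTL-with-freeze-LR-1e-05",
)
result = qa(question="Where is the company headquartered?",
            context="The company is headquartered in Jakarta and employs 200 people.")
print(result["answer"], round(result["score"], 3))
```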
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.2825 | 0.49 | 36 | 2.2341 | 49.2147 | 49.3071 |
| 3.465 | 0.98 | 72 | 1.8139 | 49.2147 | 49.4968 |
| 1.9165 | 1.48 | 108 | 1.3110 | 50.6545 | 59.1184 |
| 1.9165 | 1.97 | 144 | 0.9907 | 65.0524 | 72.4023 |
| 1.2487 | 2.46 | 180 | 0.9051 | 68.1937 | 75.7323 |
| 0.9426 | 2.95 | 216 | 0.8485 | 67.8010 | 75.3684 |
| 0.8069 | 3.45 | 252 | 0.8499 | 70.0262 | 77.7548 |
| 0.8069 | 3.94 | 288 | 0.9202 | 67.5393 | 74.8123 |
| 0.7193 | 4.44 | 324 | 0.7897 | 73.0366 | 79.9552 |
| 0.6234 | 4.92 | 360 | 0.7973 | 73.6911 | 80.5009 |
| 0.6234 | 5.42 | 396 | 0.8353 | 72.9058 | 80.2879 |
| 0.5583 | 5.91 | 432 | 0.8392 | 73.4293 | 80.6345 |
| 0.5263 | 6.41 | 468 | 0.8477 | 73.5602 | 81.0016 |
| 0.4642 | 6.9 | 504 | 0.8355 | 74.6073 | 81.7391 |
| 0.4642 | 7.39 | 540 | 0.8383 | 73.5602 | 81.1187 |
| 0.4381 | 7.88 | 576 | 0.8828 | 73.0366 | 79.8504 |
| 0.4099 | 8.38 | 612 | 0.8698 | 74.6073 | 81.6214 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector
|
JonatanGk
| 2023-05-09T17:53:13Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"spanish",
"es",
"dataset:catalonia_independence",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
language: es
tags:
- spanish
datasets:
- catalonia_independence
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: catalonia_independence
type: catalonia_independence
args: spanish
metrics:
- name: Accuracy
type: accuracy
value: 0.7880893300248138
- task:
type: text-classification
name: Text Classification
dataset:
name: catalonia_independence
type: catalonia_independence
config: catalan
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.4592039800995025
verified: true
- name: Precision Macro
type: precision
value: 0.6104489964825159
verified: true
- name: Precision Micro
type: precision
value: 0.4592039800995025
verified: true
- name: Precision Weighted
type: precision
value: 0.6167123723406555
verified: true
- name: Recall Macro
type: recall
value: 0.4146479268294389
verified: true
- name: Recall Micro
type: recall
value: 0.4592039800995025
verified: true
- name: Recall Weighted
type: recall
value: 0.4592039800995025
verified: true
- name: F1 Macro
type: f1
value: 0.33416407167650636
verified: true
- name: F1 Micro
type: f1
value: 0.4592039800995025
verified: true
- name: F1 Weighted
type: f1
value: 0.34549318538357193
verified: true
- name: loss
type: loss
value: 3.393402099609375
verified: true
widget:
- text: "Junqueras, sobre la decisi\xF3n judicial sobre Puigdemont: La justicia que\
\ falta en el Estado llega y llegar\xE1 de Europa"
- text: "Desconvocada la manifestaci\xF3n del domingo en Barcelona en apoyo a Puigdemont"
---
# roberta-base-bne-finetuned-catalonia-independence-detector
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the catalonia_independence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9415
- Accuracy: 0.7881
<details>
## Model description
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 378 | 0.5534 | 0.7558 |
| 0.6089 | 2.0 | 756 | 0.5315 | 0.7643 |
| 0.2678 | 3.0 | 1134 | 0.7336 | 0.7816 |
| 0.0605 | 4.0 | 1512 | 0.8809 | 0.7866 |
| 0.0605 | 5.0 | 1890 | 0.9415 | 0.7881 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector"
independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
independence_analysis(
"Junqueras, sobre la decisión judicial sobre Puigdemont: La justicia que falta en el Estado llega y llegará de Europa"
)
# Output:
[{'label': 'FAVOR', 'score': 0.9936726093292236}]
independence_analysis(
"El desafío independentista queda adormecido, y eso que el Gobierno ha sido muy claro en que su propuesta para Cataluña es una agenda de reencuentro, centrada en inversiones e infraestructuras")
# Output:
[{'label': 'AGAINST', 'score': 0.7508948445320129}]
independence_analysis(
"Desconvocada la manifestación del domingo en Barcelona en apoyo a Puigdemont"
)
# Output:
[{'label': 'NEUTRAL', 'score': 0.9966907501220703}]
```
[Open in Colab](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(SPANISH).ipynb#scrollTo=uNMOXJz38W6U)
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
Thx to HF.co & [@lewtun](https://github.com/lewtun) for Dataset ;)
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/)
|
JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish
|
JonatanGk
| 2023-05-09T17:51:31Z | 45 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"spanish",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: es
tags:
- "spanish"
metrics:
- accuracy
widget:
- text: "Eres mas pequeño que un pitufo!"
- text: "Eres muy feo!"
- text: "Odio tu forma de hablar!"
- text: "Eres tan fea que cuando eras pequeña te echaban de comer por debajo de la puerta."
---
# roberta-base-bne-finetuned-ciberbullying-spanish
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on a dataset generated by scraping several social networks (Twitter, YouTube, ...) to detect cyberbullying in Spanish.
It achieves the following results on the evaluation set:
- Loss: 0.1657
- Accuracy: 0.9607
## Training and evaluation data
I used a concatenation of multiple datasets generated by scraping social networks (Twitter, YouTube, Discord, ...) to fine-tune this model. The total number of sentence pairs is above 360k.
## Training procedure
<details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.1512 | 1.0 | 22227 | 0.9501 | 0.1418 |
| 0.1253 | 2.0 | 44454 | 0.9567 | 0.1499 |
| 0.0973 | 3.0 | 66681 | 0.9594 | 0.1397 |
| 0.0658 | 4.0 | 88908 | 0.9607 | 0.1657 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-bne-finetuned-ciberbullying-spanish"
bullying_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
bullying_analysis(
"Desde que te vi me enamoré de ti."
)
# Output:
[{'label': 'Not_bullying', 'score': 0.9995710253715515}]
bullying_analysis(
"Eres tan fea que cuando eras pequeña te echaban de comer por debajo de la puerta."
)
# Output:
[{'label': 'Bullying', 'score': 0.9918262958526611}]
```
[Open in Colab](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Cyberbullying_detection_(SPANISH).ipynb)
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/)
|
dxli/berry_bowl
|
dxli
| 2023-05-09T17:31:12Z | 33 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T08:19:37Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/berry_bowl
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. Some example images can be found below.
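A minimal loading sketch, assuming the standard `diffusers` textual-inversion workflow; the placeholder token in the prompt is an assumption, so check the embedding file in this repository for the actual token name:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("dxli/berry_bowl")  # loads the learned embedding from this repo
# "<berry_bowl>" is an assumed placeholder token; replace it with the token stored in the embedding.
image = pipe("a photo of <berry_bowl> on a kitchen table").images[0]
image.save("berry_bowl.png")
```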
|
directtt/wine-reviews-roberta
|
directtt
| 2023-05-09T17:28:12Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T17:26:11Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: wine-reviews-roberta
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wine-reviews-roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2715
- Train Acc: 0.8906
- Validation Loss: 0.6536
- Validation Acc: 0.7701
- Epoch: 4
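A minimal inference sketch, assuming the TensorFlow weights load through the standard `transformers` API (the example review is illustrative, and the label names live in the repo's config):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "directtt/wine-reviews-roberta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Bright acidity with notes of citrus and green apple.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class id; map it via the config's id2label
```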
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 24455, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Acc | Validation Loss | Validation Acc | Epoch |
|:----------:|:---------:|:---------------:|:--------------:|:-----:|
| 0.6164 | 0.7297 | 0.5360 | 0.7665 | 0 |
| 0.5040 | 0.7820 | 0.5145 | 0.7739 | 1 |
| 0.4248 | 0.8206 | 0.5470 | 0.7744 | 2 |
| 0.3413 | 0.8583 | 0.6132 | 0.7699 | 3 |
| 0.2715 | 0.8906 | 0.6536 | 0.7701 | 4 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Mael7307/sbiobert
|
Mael7307
| 2023-05-09T17:18:19Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-09T16:39:01Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Mael7307/sbiobert
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Mael7307/sbiobert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Mael7307/sbiobert')
model = AutoModel.from_pretrained('Mael7307/sbiobert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Mael7307/sbiobert)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
YashGajjar/Reinforce-CartPole-v1
|
YashGajjar
| 2023-05-09T17:15:38Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T17:15:28Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
reyhanemyr/bert-base-uncased-finetuned-recruitment-exp
|
reyhanemyr
| 2023-05-09T17:09:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-09T16:59:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-recruitment-exp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-recruitment-exp
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1433
- Precision: 0.6113
- Recall: 0.7250
- F1: 0.6633
- Accuracy: 0.9612
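A minimal usage sketch, assuming the standard `transformers` token-classification pipeline (the example sentence is illustrative; entity labels come from the repo's config):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="reyhanemyr/bert-base-uncased-finetuned-recruitment-exp",
    aggregation_strategy="simple",
)
for entity in ner("We are hiring a senior data engineer in Berlin with five years of experience."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```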
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 17 | 0.5254 | 0.0 | 0.0 | 0.0 | 0.8991 |
| No log | 2.0 | 34 | 0.3199 | 0.2854 | 0.2255 | 0.2519 | 0.9152 |
| No log | 3.0 | 51 | 0.2303 | 0.3948 | 0.5232 | 0.4500 | 0.9370 |
| No log | 4.0 | 68 | 0.1878 | 0.4876 | 0.6212 | 0.5463 | 0.9496 |
| No log | 5.0 | 85 | 0.1630 | 0.5544 | 0.6548 | 0.6005 | 0.9553 |
| No log | 6.0 | 102 | 0.1433 | 0.6113 | 0.7250 | 0.6633 | 0.9612 |
| No log | 7.0 | 119 | 0.1469 | 0.6412 | 0.7458 | 0.6895 | 0.9621 |
| No log | 8.0 | 136 | 0.1463 | 0.6516 | 0.7418 | 0.6938 | 0.9633 |
| No log | 9.0 | 153 | 0.1446 | 0.6664 | 0.7527 | 0.7069 | 0.9643 |
| No log | 10.0 | 170 | 0.1447 | 0.6579 | 0.7646 | 0.7072 | 0.9641 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
J001/mt5-ch-en-v1
|
J001
| 2023-05-09T17:01:40Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-05-09T13:53:50Z |
---
tags:
- translation
- generated_from_trainer
model-index:
- name: mt5-ch-en-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-ch-en-v1
This model is a fine-tuned version of [IDEA-CCNL/Randeng-T5-77M](https://huggingface.co/IDEA-CCNL/Randeng-T5-77M) on an unknown dataset.
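A minimal usage sketch, assuming the checkpoint translates Chinese to English via the generic `transformers` text2text-generation pipeline and accepts raw Chinese input without a task prefix (both are assumptions; check the training setup):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="J001/mt5-ch-en-v1")
print(translator("今天天气很好。", max_length=64)[0]["generated_text"])
```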
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
reyhanemyr/distilbert-base-uncased-finetuned-recruitment-exp
|
reyhanemyr
| 2023-05-09T16:57:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-09T16:51:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-recruitment-exp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-recruitment-exp
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1742
- Precision: 0.6204
- Recall: 0.6855
- F1: 0.6513
- Accuracy: 0.9561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 17 | 0.5203 | 0.0 | 0.0 | 0.0 | 0.8991 |
| No log | 2.0 | 34 | 0.3797 | 0.2979 | 0.0277 | 0.0507 | 0.9030 |
| No log | 3.0 | 51 | 0.2983 | 0.3171 | 0.4194 | 0.3612 | 0.9222 |
| No log | 4.0 | 68 | 0.2321 | 0.4219 | 0.4916 | 0.4541 | 0.9375 |
| No log | 5.0 | 85 | 0.2100 | 0.5076 | 0.5262 | 0.5168 | 0.9453 |
| No log | 6.0 | 102 | 0.1899 | 0.5174 | 0.5885 | 0.5507 | 0.9506 |
| No log | 7.0 | 119 | 0.1775 | 0.5395 | 0.6350 | 0.5834 | 0.9509 |
| No log | 8.0 | 136 | 0.1817 | 0.6282 | 0.6617 | 0.6445 | 0.9550 |
| No log | 9.0 | 153 | 0.1775 | 0.6262 | 0.6726 | 0.6485 | 0.9558 |
| No log | 10.0 | 170 | 0.1742 | 0.6204 | 0.6855 | 0.6513 | 0.9561 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dxli/bear_plushie
|
dxli
| 2023-05-09T16:52:25Z | 18 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T07:24:41Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/bear_plushie
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. Some example images can be found below.
|
vldnechai/LunarLander_v2_PPO
|
vldnechai
| 2023-05-09T16:52:12Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T15:30:17Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -37.32 +/- 13.17
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.001
'num_envs': 32
'num_steps': 2048
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 128
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'vldnechai/LunarLander_v2_PPO'
'batch_size': 65536
'minibatch_size': 512}
```
|
zeyefkey/q-Taxi-v3.2
|
zeyefkey
| 2023-05-09T16:51:42Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T16:48:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3.2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# Assumes `gym` (or `gymnasium`) is imported and that `load_from_hub` is the helper from the
# Deep RL Course notebook, which downloads and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="zeyefkey/q-Taxi-v3.2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tastytoast/ppo-LunarLander-v2
|
tastytoast
| 2023-05-09T16:48:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T16:48:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.98 +/- 28.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the files in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the saved agent from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub(repo_id="tastytoast/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
reyhanemyr/roberta-base-finetuned-recruitment-exp
|
reyhanemyr
| 2023-05-09T16:46:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-09T16:35:51Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-finetuned-recruitment-exp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-recruitment-exp
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1044
- Precision: 0.7320
- Recall: 0.8560
- F1: 0.7892
- Accuracy: 0.9713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 17 | 0.4051 | 0.25 | 0.0061 | 0.0119 | 0.8984 |
| No log | 2.0 | 34 | 0.2450 | 0.4040 | 0.3732 | 0.3880 | 0.9280 |
| No log | 3.0 | 51 | 0.1481 | 0.5385 | 0.6663 | 0.5956 | 0.9555 |
| No log | 4.0 | 68 | 0.1269 | 0.6295 | 0.7961 | 0.7031 | 0.964 |
| No log | 5.0 | 85 | 0.1101 | 0.6639 | 0.8235 | 0.7352 | 0.9679 |
| No log | 6.0 | 102 | 0.1116 | 0.7287 | 0.7819 | 0.7544 | 0.9701 |
| No log | 7.0 | 119 | 0.1160 | 0.7026 | 0.8266 | 0.7596 | 0.9684 |
| No log | 8.0 | 136 | 0.1071 | 0.7442 | 0.8499 | 0.7936 | 0.9717 |
| No log | 9.0 | 153 | 0.1044 | 0.7320 | 0.8560 | 0.7892 | 0.9713 |
| No log | 10.0 | 170 | 0.1081 | 0.7532 | 0.8448 | 0.7964 | 0.9722 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
VictorGil75/autotrain-rm-soccer_class-56881131860
|
VictorGil75
| 2023-05-09T16:45:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:VictorGil75/autotrain-data-rm-soccer_class",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T16:43:58Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- VictorGil75/autotrain-data-rm-soccer_class
co2_eq_emissions:
emissions: 0.4133097011272339
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 56881131860
- CO2 Emissions (in grams): 0.4133
## Validation Metrics
- Loss: 0.064
- Accuracy: 0.985
- Precision: 0.990
- Recall: 0.980
- AUC: 0.995
- F1: 0.985
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/VictorGil75/autotrain-rm-soccer_class-56881131860
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("VictorGil75/autotrain-rm-soccer_class-56881131860", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("VictorGil75/autotrain-rm-soccer_class-56881131860", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
JustFrederik/sugoi-v4-ja-en-ct2-int8
|
JustFrederik
| 2023-05-09T16:36:28Z | 1 | 0 |
transformers
|
[
"transformers",
"translation",
"ja",
"en",
"license:unknown",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-05-09T16:15:55Z |
---
license: unknown
language:
- ja
- en
pipeline_tag: translation
---
https://sugoitranslator.com
<br />
https://blog.sugoitranslator.com
<br />
https://www.patreon.com/mingshiba
<br />
```
ct2-fairseq-converter --model_path big.pretrain.pt --data_dir . --source_lang ja --target_lang en --quantization int8 --output_dir ../converted/sugoi-v4-ja-en-ct2-int8
```
|
JustFrederik/jparacrawl-v3-small-ct2-float16
|
JustFrederik
| 2023-05-09T16:31:05Z | 0 | 0 | null |
[
"translation",
"ja",
"en",
"license:unknown",
"region:us"
] |
translation
| 2023-05-09T16:05:43Z |
---
license: unknown
language:
- ja
- en
pipeline_tag: translation
---
https://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/
<br />
```
ct2-fairseq-converter --model_path small.pretrain.pt --data_dir . --source_lang en --target_lang ja --quantization float16 --output_dir ../converted/jparacrawl-v3-small-ct2-float16/en-ja
```
```
ct2-fairseq-converter --model_path ./small/small.pretrain.pt --data_dir ./small --source_lang ja --target_lang en --quantization float16 --output_dir ../converted/jparacrawl-v3-small-ct2-float16/ja-en
```
|
JustFrederik/jparacrawl-v3-big-ct2-int8
|
JustFrederik
| 2023-05-09T16:30:02Z | 0 | 0 | null |
[
"translation",
"ja",
"en",
"license:unknown",
"region:us"
] |
translation
| 2023-05-09T15:58:33Z |
---
license: unknown
language:
- ja
- en
pipeline_tag: translation
---
https://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/
<br />
```
ct2-fairseq-converter --model_path big.pretrain.pt --data_dir . --source_lang en --target_lang ja --quantization int8 --output_dir ../converted/jparacrawl-v3-big-ct2-int8/en-ja
```
```
ct2-fairseq-converter --model_path ./big/big.pretrain.pt --data_dir ./big --source_lang ja --target_lang en --quantization int8 --output_dir ../converted/jparacrawl-v3-big-ct2-int8/ja-en
```
|
Luca77/a2c-AntBulletEnv-v0
|
Luca77
| 2023-05-09T16:24:19Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-01T14:22:14Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1506.49 +/- 61.06
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the files in this repo):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the saved agent from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub(repo_id="Luca77/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
hr-elrond/autotrain-p2_finbert_training_100-56875131853
|
hr-elrond
| 2023-05-09T16:23:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:hr-elrond/autotrain-data-p2_finbert_training_100",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T16:22:38Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- hr-elrond/autotrain-data-p2_finbert_training_100
co2_eq_emissions:
emissions: 0.2967273355715001
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 56875131853
- CO2 Emissions (in grams): 0.2967
## Validation Metrics
- Loss: 0.068
- Accuracy: 0.984
- Precision: 0.993
- Recall: 0.983
- AUC: 0.996
- F1: 0.988
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/hr-elrond/autotrain-p2_finbert_training_100-56875131853
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hr-elrond/autotrain-p2_finbert_training_100-56875131853", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hr-elrond/autotrain-p2_finbert_training_100-56875131853", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-5
|
xinyixiuxiu
| 2023-05-09T16:21:49Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T15:45:59Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-5
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0332
- Train Accuracy: 0.9897
- Validation Loss: 0.1438
- Validation Accuracy: 0.9599
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0332 | 0.9897 | 0.1438 | 0.9599 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
|
eason0203/swin-tiny-patch4-window7-224-finetuned-arty
|
eason0203
| 2023-05-09T16:07:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-09T15:12:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-arty
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-arty
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2386 | 0.43 | 50 | 0.0643 | 0.9967 |
| 0.0359 | 0.87 | 100 | 0.0035 | 0.9996 |
| 0.058 | 1.3 | 150 | 0.0015 | 0.9996 |
| 0.0297 | 1.74 | 200 | 0.0003 | 1.0 |
| 0.0175 | 2.17 | 250 | 0.0002 | 1.0 |
| 0.0166 | 2.6 | 300 | 0.0002 | 1.0 |
| 0.0318 | 3.04 | 350 | 0.0001 | 1.0 |
| 0.0062 | 3.47 | 400 | 0.0002 | 1.0 |
| 0.0101 | 3.9 | 450 | 0.0002 | 1.0 |
| 0.0066 | 4.34 | 500 | 0.0002 | 1.0 |
| 0.005 | 4.77 | 550 | 0.0002 | 1.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
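No usage example is included in the card; a minimal sketch with the image-classification pipeline (the image path is a placeholder, and the label set is whatever the imagefolder training data defined):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="eason0203/swin-tiny-patch4-window7-224-finetuned-arty")
print(classifier("example.jpg", top_k=3))  # "example.jpg" is a placeholder path
```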
|
CSHaitao/SAILER_zh
|
CSHaitao
| 2023-05-09T16:06:33Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"arxiv:2304.11370",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-05-09T15:16:36Z |
---
license: mit
---
SAILER is a structure-aware pre-trained language model. It is highlighted in the following three aspects:
- SAILER fully utilizes the structural information contained in legal case documents and pays more attention to key legal elements, similar to how legal experts browse legal case documents.
- SAILER employs an asymmetric encoder-decoder architecture to integrate several different pre-training objectives. In this way, rich semantic information across tasks is encoded into dense vectors.
- SAILER has powerful discriminative ability, even without any legal annotation data. It can distinguish legal cases with different charges accurately.
SAILER_zh is pre-trained on Chinese criminal-law legal case documents.
If you find our work useful, please star the repository and cite our work:
```
@misc{SAILER,
title={SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval},
author={Haitao Li and Qingyao Ai and Jia Chen and Qian Dong and Yueyue Wu and Yiqun Liu and Chong Chen and Qi Tian},
year={2023},
eprint={2304.11370},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
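The card does not show how to encode a case into a dense vector; a minimal sketch is below, using the [CLS] hidden state as the representation (an assumption — check the SAILER repository for the pooling it actually uses):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("CSHaitao/SAILER_zh")
model = AutoModel.from_pretrained("CSHaitao/SAILER_zh")

case_text = "本院认为,被告人的行为已构成盗窃罪……"  # a short legal-case snippet
inputs = tokenizer(case_text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
case_embedding = outputs.last_hidden_state[:, 0]  # [CLS] vector as the dense representation
print(case_embedding.shape)
```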
|
judithrosell/sa_english_new
|
judithrosell
| 2023-05-09T16:03:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T14:19:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: sa_english_new
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9394
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_english_new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3371
- Accuracy: 0.9394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.244 | 1.0 | 1563 | 0.2231 | 0.9151 |
| 0.1826 | 2.0 | 3126 | 0.2054 | 0.9396 |
| 0.1196 | 3.0 | 4689 | 0.2671 | 0.9350 |
| 0.0769 | 4.0 | 6252 | 0.2950 | 0.9399 |
| 0.0455 | 5.0 | 7815 | 0.3371 | 0.9394 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
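A minimal usage sketch with the standard text-classification pipeline (label names come from the fine-tuned head's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="judithrosell/sa_english_new")
print(classifier("This movie was a complete waste of time."))
```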
|
dxli/backpack_dog
|
dxli
| 2023-05-09T16:00:22Z | 25 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T06:40:33Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/backpack_dog
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
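A minimal usage sketch with diffusers is below; the placeholder token in the prompt is an assumption — check the repository files for the actual token string the embedding was trained with:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("dxli/backpack_dog")  # loads the learned embedding from this repo
image = pipe("a photo of <backpack_dog> on a wooden table").images[0]  # token name assumed
image.save("backpack_dog.png")
```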
|
dxli/candle
|
dxli
| 2023-05-09T15:57:07Z | 15 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T15:03:41Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/candle
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
|
Purus15987/Summarization_model
|
Purus15987
| 2023-05-09T15:56:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-28T05:40:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: Summarization_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1392
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summarization_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5515
- Rouge1: 0.1392
- Rouge2: 0.0503
- Rougel: 0.1161
- Rougelsum: 0.1159
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8419 | 0.1272 | 0.0393 | 0.108 | 0.1079 | 19.0 |
| No log | 2.0 | 124 | 2.6329 | 0.1333 | 0.0458 | 0.1133 | 0.1131 | 19.0 |
| No log | 3.0 | 186 | 2.5693 | 0.1379 | 0.0494 | 0.1164 | 0.1162 | 19.0 |
| No log | 4.0 | 248 | 2.5515 | 0.1392 | 0.0503 | 0.1161 | 0.1159 | 19.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
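A minimal usage sketch with the summarization pipeline (the example text is a placeholder; T5 checkpoints normally apply the "summarize: " prefix through their config):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Purus15987/Summarization_model")
text = "The bill would require the state to establish a grant program for local water agencies ..."
print(summarizer(text, max_length=60, min_length=10, do_sample=False))
```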
|
reyhanemyr/roberta-base-finetuned-scientific-exp
|
reyhanemyr
| 2023-05-09T15:48:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-09T15:38:15Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-finetuned-scientific-exp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-scientific-exp
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1255
- Precision: 0.7662
- Recall: 0.7484
- F1: 0.7572
- Accuracy: 0.9674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 81 | 0.2172 | 0.6049 | 0.5180 | 0.5581 | 0.9433 |
| No log | 2.0 | 162 | 0.1470 | 0.7556 | 0.6469 | 0.6970 | 0.9582 |
| No log | 3.0 | 243 | 0.1255 | 0.7662 | 0.7484 | 0.7572 | 0.9674 |
| No log | 4.0 | 324 | 0.1261 | 0.7546 | 0.7738 | 0.7641 | 0.9666 |
| No log | 5.0 | 405 | 0.1339 | 0.7184 | 0.8414 | 0.7751 | 0.9635 |
| No log | 6.0 | 486 | 0.1350 | 0.7112 | 0.8330 | 0.7673 | 0.9627 |
| 0.1498 | 7.0 | 567 | 0.1362 | 0.7471 | 0.8309 | 0.7868 | 0.9693 |
| 0.1498 | 8.0 | 648 | 0.1530 | 0.7174 | 0.8266 | 0.7682 | 0.9644 |
| 0.1498 | 9.0 | 729 | 0.1587 | 0.7392 | 0.8330 | 0.7833 | 0.9655 |
| 0.1498 | 10.0 | 810 | 0.1610 | 0.7416 | 0.8372 | 0.7865 | 0.9651 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
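A minimal usage sketch with the token-classification pipeline (the entity label set is whatever the fine-tuning data defined, which this card does not document):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="reyhanemyr/roberta-base-finetuned-scientific-exp",
    aggregation_strategy="simple",
)
print(ner("We measured the bandgap of the MoS2 monolayer at room temperature."))
```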
|
rethem-expeditecommerce/MiniLM-L6-GPL
|
rethem-expeditecommerce
| 2023-05-09T15:46:31Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-05T15:17:37Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.
We then apply the cross entropy loss by comparing with true pairs.
#### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
dxli/dog8
|
dxli
| 2023-05-09T15:32:48Z | 20 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T05:28:32Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/dog8
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
|
dxli/backpack
|
dxli
| 2023-05-09T15:17:25Z | 32 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T06:00:30Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/backpack
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
|
labicquette/Reinforce-Cartpole-v1
|
labicquette
| 2023-05-09T15:08:06Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T15:07:58Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Khayoon/ppo-Huggy
|
Khayoon
| 2023-05-09T15:07:52Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-09T15:07:45Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: Khayoon/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
agomberto/trocr-base-printed-fr
|
agomberto
| 2023-05-09T15:02:25Z | 66 | 2 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"trocr",
"image-to-text",
"fr",
"arxiv:2109.10282",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-05-04T08:39:53Z |
---
license: mit
language:
- fr
pipeline_tag: image-to-text
tags:
- trocr
- vision-encoder-decoder
metrics:
- cer
- wer
widget:
- src: >-
https://raw.githubusercontent.com/agombert/trocr-base-printed-fr/main/sample_imgs/3.jpg
example_title: Example 1
- src: >-
https://raw.githubusercontent.com/agombert/trocr-base-printed-fr/main/sample_imgs/0.jpg
example_title: Example 2
- src: >-
https://raw.githubusercontent.com/agombert/trocr-base-printed-fr/main/sample_imgs/1.jpg
example_title: Example 3
---
# TrOCR for French
## Overview
TrOCR has not yet been released for French, so we trained a French model for proof-of-concept purposes. Based on this model, it is recommended to collect more data to additionally train the 1st stage or perform fine-tuning as the 2nd stage.
It's a special case of the [English trOCR model](https://huggingface.co/microsoft/trocr-base-printed) introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr)
This was possible thanks to [daekun-ml](https://huggingface.co/daekeun-ml/ko-trocr-base-nsmc-news-chatbot) and [Niels Rogge](https://github.com/NielsRogge/), who enabled us to publish this model with their tutorials and code.
## Collecting data
### Text data
We created training data of ~723k examples by taking random samples of the following datasets:
- [MultiLegalPile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) - 90k
- [French book Reviews](https://huggingface.co/datasets/Abirate/french_book_reviews) - 20k
- [WikiNeural](https://huggingface.co/datasets/Babelscape/wikineural) - 83k
- [Multilingual cc news](https://huggingface.co/datasets/intfloat/multilingual_cc_news) - 119k
- [Reviews Amazon Multi](https://huggingface.co/datasets/amazon_reviews_multi) - 153k
- [Opus Book](https://huggingface.co/datasets/opus_books) - 70k
- [BerlinText](https://huggingface.co/datasets/biglam/berlin_state_library_ocr) - 38k
We collected parts of each of the datasets and then randomly cut the sentences to build the final training set.
### Image Data
Image data was generated with TextRecognitionDataGenerator (https://github.com/Belval/TextRecognitionDataGenerator) introduced in the TrOCR paper.
Below is a code snippet for generating images.
```shell
python3 ./trdg/run.py -i ocr_dataset_poc.txt -w 5 -t {num_cores} -f 64 -l ko -c {num_samples} -na 2 --output_dir {dataset_dir}
```
## Training
### Base model
The encoder model used `facebook/deit-base-distilled-patch16-384` and the decoder model used `camembert-base`. This is easier than starting from the `microsoft/trocr-base-stage1` weights.
### Parameters
We used heuristic parameters without separate hyperparameter tuning.
- learning_rate = 4e-5
- epochs = 25
- fp16 = True
- max_length = 32
### Results on dev set
For the dev set we got the following results:
- size of the test set: 72k examples
- CER: 0.13
- WER: 0.26
- Val Loss: 0.424
## Usage
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel, AutoTokenizer
import requests
from io import BytesIO
from PIL import Image
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("agomberto/trocr-base-printed-fr")
tokenizer = AutoTokenizer.from_pretrained("agomberto/trocr-base-printed-fr")
url = "https://github.com/agombert/trocr-base-printed-fr/blob/main/sample_imgs/0.jpg"
response = requests.get(url)
img = Image.open(BytesIO(response.content))
pixel_values = processor(img, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_length=32)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
All the code required for data collection and model training has been published on the author's Github.
- https://github.com/agombert/trocr-base-printed-fr/
|
togethercomputer/RedPajama-INCITE-Base-3B-v1
|
togethercomputer
| 2023-05-09T14:59:20Z | 3,772 | 90 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-04T05:51:02Z |
---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
---
# RedPajama-INCITE-Base-3B-v1
RedPajama-INCITE-Base-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
The training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and INCITE program.
- Base Model: [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1)
- Instruction-tuned Version: [RedPajama-INCITE-Instruct-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1)
- Chat Version: [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1)
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
## GPU Inference
This requires a GPU with 8GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
a name that has been synonymous with the computer age since the 1950s. The British mathematician, logician, and cryptanalyst is widely regarded as the father of modern computing. His contributions to the development of the modern computer and the theory of computation have had a profound impact on the world we live in today.
Turing’s contributions to the development of the modern computer were made in the 1940s and 1950s. He is most famous for his work on the Turing machine, a theoretical model of a computing machine that was able to perform all the mathematical operations of a computer. Turing’s work on the...
"""
```
## GPU Inference in Int8
To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
the man who cracked the Enigma code during World War II, and who was later convicted of homosexual acts. He was a brilliant mathematician, and a visionary who foresaw the computer age....
"""
```
## CPU Inference
You can run inference on CPU as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.bfloat16)
# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
a name that is synonymous with the history of computer science. As the man who invented the Turing machine, the mathematical model that defines the limits of what can be computed, Turing is credited with the invention of the modern computer. Turing was also a mathematician and logician, and his work in these fields led to the development of the field of artificial intelligence...
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
`RedPajama-INCITE-Base-3B-v1` is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
`RedPajama-INCITE-Base-3B-v1` is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
`RedPajama-INCITE-Base-3B-v1`, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 256 nodes of 6xV100 (IBM Power9), on the OLCF Summit cluster
- **Optimizer:** Apex FusedAdam
- **Parallelism:** Pipeline parallel 6, tensor parallel 2
- **Gradient Accumulations**: 8 (global batch size 4M tokens)
- **Num of Tokens:** 800B Tokens
- **Learning rate:** 0.00016
## Benchmark
Please refer to our [blog post](https://together.xyz) for benchmark results.
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
|
dmitry-vorobiev/rubert_ria_headlines
|
dmitry-vorobiev
| 2023-05-09T14:56:55Z | 329 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"bert",
"rubert",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- ru
tags:
- summarization
- bert
- rubert
license: mit
---
# rubert_ria_headlines
## Description
*bert2bert* model, initialized with the `DeepPavlov/rubert-base-cased` pretrained weights and
fine-tuned on the first 99% of ["Rossiya Segodnya" news dataset](https://github.com/RossiyaSegodnya/ria_news_dataset) for 2 epochs.
## Usage example
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
MODEL_NAME = "dmitry-vorobiev/rubert_ria_headlines"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
text = "Скопируйте текст статьи / новости"
encoded_batch = tokenizer.prepare_seq2seq_batch(
[text],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512)
output_ids = model.generate(
input_ids=encoded_batch["input_ids"],
max_length=36,
no_repeat_ngram_size=3,
num_beams=5,
top_k=0
)
headline = tokenizer.decode(output_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=False)
print(headline)
```
## Datasets
- [ria_news](https://github.com/RossiyaSegodnya/ria_news_dataset)
## How was it trained?
I used a free TPUv3 on Kaggle. The model was trained for 3 epochs with an effective batch size of 192 and soft restarts (warmup steps 1500 / 500 / 500, with a new optimizer state at the start of each epoch).
- [1 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53254694)
- [2 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53269040)
- [3 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53280797)
Common train params:
```shell
export XLA_USE_BF16=1
export XLA_TENSOR_ALLOCATOR_MAXSIZE=100000000
python nlp_headline_rus/src/train_seq2seq.py \
--do_train \
--tie_encoder_decoder \
--max_source_length 512 \
--max_target_length 32 \
--val_max_target_length 48 \
--tpu_num_cores 8 \
--per_device_train_batch_size 24 \
--gradient_accumulation_steps 1 \
--learning_rate 5e-4 \
--adam_epsilon 1e-6 \
--weight_decay 1e-5 \
```
## Validation results
- Using [last 1% of ria](https://drive.google.com/drive/folders/1ztAeyb1BiLMgXwOgOJS7WMR4PGiI1q92) dataset
- Using [gazeta_ru test](https://drive.google.com/drive/folders/1CyowuRpecsLTcDbqEfmAvkCWOod58g_e) split
- Using [gazeta_ru val](https://drive.google.com/drive/folders/1XZFOXHSXLKdhzm61ceVLw3aautrdskIu) split
|
Nakul24/SM_Bot
|
Nakul24
| 2023-05-09T14:33:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-23T05:14:27Z |
---
tags:
- conversational
---
|
can34/Modill
|
can34
| 2023-05-09T14:33:36Z | 3 | 0 |
diffusers
|
[
"diffusers",
"art",
"text-to-image",
"region:us"
] |
text-to-image
| 2023-05-09T13:35:18Z |
---
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---
🔥Declaration: I believe illustrators and artists cannot be replaced by AI. Although these models can accelerate design and drawing, the details, the spirit inside, and the visual logic cannot be reduced to data in neural networks.
Modill (Modern-Illustration) is a trained checkpoint for making attractive and creative illustrations and paintings. I'm a UI/UX designer, so I wanted a model that can generate flat illustrations for both business and creative design.
🔥Advantages:
Modill is trained on 289 brilliant illustrations from different designers/illustrators. It can draw characters with exaggerated bodies and raster textures.
There are no strict restrictions on style. The training data includes different styles, and the absence of overfitting allows more distinctive outputs.
🔥Recommended parameters:
Sampler: DPM2 Karras, 20~40 steps.
CFG Scale: 7-9.
Resolution: 512*512
Negatives: poorly lit, duplicated leg, no text, one shoe, multiple head, strange face, error head, missing hand, blur, stereopsis, sex, waterpoint
*** Adding some style LoRA models might produce great art. ***
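A minimal usage sketch with diffusers, assuming the repository ships standard Stable Diffusion weights in diffusers format (not stated explicitly in the card):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("can34/Modill", torch_dtype=torch.float16).to("cuda")

prompt = "flat illustration of a designer working at a desk, modern style, raster texture"
negative = "poorly lit, duplicated leg, one shoe, multiple head, missing hand, blur"
image = pipe(prompt, negative_prompt=negative, num_inference_steps=30, guidance_scale=8).images[0]
image.save("modill_sample.png")
```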
-------------------------------------------------------------------------------
|
Althhecow/BlushyandSpicy_v4
|
Althhecow
| 2023-05-09T14:29:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-09T14:11:23Z |
Model made using ~100 images from Blushy&Spicy on Twitter.
Available as both LoRA and full models.
MeinaMix is recommended but not required.
|
ighina/roberta_wikidisease_topseg_topsam
|
ighina
| 2023-05-09T14:02:21Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-09T14:01:32Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ighina/roberta_wikidisease_topseg_topsam
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ighina/roberta_wikidisease_topseg_topsam')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ighina/roberta_wikidisease_topseg_topsam')
model = AutoModel.from_pretrained('ighina/roberta_wikidisease_topseg_topsam')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ighina/roberta_wikidisease_topseg_topsam)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6033 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
danazoid/ppo-LunarLander-v2
|
danazoid
| 2023-05-09T13:59:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T13:29:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.70 +/- 41.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption — check the repository's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed; adjust it to the .zip actually stored in this repository
checkpoint = load_from_hub(repo_id="danazoid/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ChrisOfLondon/ppo-LunarLander-v2
|
ChrisOfLondon
| 2023-05-09T13:42:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T13:42:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.03 +/- 11.99
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption — check the repository's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed; adjust it to the .zip actually stored in this repository
checkpoint = load_from_hub(repo_id="ChrisOfLondon/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kinshuk-h/t5-kelm-tekgen-kg-small-finetuned
|
kinshuk-h
| 2023-05-09T13:38:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"legal",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-09T13:37:46Z |
---
license: mit
language:
- en
pipeline_tag: text2text-generation
tags:
- legal
---
# t5-kelm-tekgen-kg-small-finetuned
Google's T5 model ([t5-small](https://huggingface.co/t5-small)) finetuned over KELM-TEKGEN KG triples for link prediction.
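The expected input format is not documented here; the sketch below assumes a flattened-triple prompt with a masked tail entity, which may not match the actual fine-tuning format — verify against the training code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kinshuk-h/t5-kelm-tekgen-kg-small-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = "Barack Obama | place of birth | <extra_id_0>"  # hypothetical prompt format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```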
|
kinshuk-h/flan-t5-kelm-tekgen-kg-small-finetuned
|
kinshuk-h
| 2023-05-09T13:36:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"legal",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-09T13:35:42Z |
---
license: mit
language:
- en
pipeline_tag: text2text-generation
tags:
- legal
---
# flan-t5-kelm-tekgen-kg-small-finetuned
Google's Flan T5 model ([flan-t5-small](https://huggingface.co/google/flan-t5-small)) finetuned over KELM-TEKGEN KG triples for link prediction.
|
Jilas/Nar
|
Jilas
| 2023-05-09T13:08:08Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-2.0",
"region:us"
] | null | 2023-05-09T13:08:08Z |
---
license: cc-by-nc-sa-2.0
---
|
wiesmpas/testmodelforkiex
|
wiesmpas
| 2023-05-09T12:50:18Z | 5 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-05-09T12:48:34Z |
---
pipeline_tag: image-classification
---
|
AlienKevin/ipa_ocr
|
AlienKevin
| 2023-05-09T12:45:04Z | 0 | 0 | null |
[
"tensorboard",
"image-to-text",
"zh",
"license:mit",
"region:us"
] |
image-to-text
| 2023-05-08T13:43:03Z |
---
license: mit
language:
- zh
pipeline_tag: image-to-text
---
# Target: Convert Scanned Images of IPA symbols to Pinyin
Scanned images of IPA phonetic symbols for Chengdunese (成都话) in The Great Dictionary of Modern Chinese Dialects (現代漢語方言大詞典).
# Training and Test Set
* 2,553 images of IPA phonetic symbols generated from Pinyin pronunciations found in Sichuanese Dialect Dictionary (四川方言词典 教你一口地道的四川话) and the word list of the Shupin (蜀拼) input method.
* 80/20 split on train/test
# Results
* Trained for 180 steps with a batch size of 32
* Final Character Error Rate of 0.795% on test set
* TODO: label part of the scanned images to see if model generalizes on target task
|
alexandrualexandru/my-final-v1-text-to-sparql-combined-dataset-t5-base-2023-05-09_09-13
|
alexandrualexandru
| 2023-05-09T12:33:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-09T09:17:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: my-final-v1-text-to-sparql-combined-dataset-t5-base-2023-05-09_09-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-final-v1-text-to-sparql-combined-dataset-t5-base-2023-05-09_09-13
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3456
- Gen Len: 19.0
- Bertscorer-p: 0.5013
- Bertscorer-r: 0.1137
- Bertscorer-f1: 0.3000
- Sacrebleu-score: 6.1003
- Sacrebleu-precisions: [77.97754754552538, 64.74142628270293, 53.3199157675034, 47.63691495511611]
- Bleu-bp: 0.1019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------------------------------------------------------:|:-------:|
| 0.4215 | 1.0 | 7822 | 0.3919 | 19.0 | 0.4997 | 0.1122 | 0.2984 | 5.8699 | [77.35323282257656, 63.16682990532158, 51.41608735111668, 45.63668646835748] | 0.1009 |
| 0.3639 | 2.0 | 15644 | 0.3456 | 19.0 | 0.5013 | 0.1137 | 0.3000 | 6.1003 | [77.97754754552538, 64.74142628270293, 53.3199157675034, 47.63691495511611] | 0.1019 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
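A minimal usage sketch (the input convention, e.g. whether a task prefix is required, is not documented in this card; plain natural-language input is assumed):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "alexandrualexandru/my-final-v1-text-to-sparql-combined-dataset-t5-base-2023-05-09_09-13"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Who is the mayor of Berlin?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```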
|
directtt/wine-reviews-distilbert
|
directtt
| 2023-05-09T10:46:50Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T09:11:41Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: wine-reviews-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wine-reviews-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3834
- Train Acc: 0.8375
- Validation Loss: 0.5538
- Validation Acc: 0.7741
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 24455, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Acc | Validation Loss | Validation Acc | Epoch |
|:----------:|:---------:|:---------------:|:--------------:|:-----:|
| 0.6005 | 0.7381 | 0.5342 | 0.7661 | 0 |
| 0.4822 | 0.7915 | 0.5570 | 0.7612 | 1 |
| 0.3834 | 0.8375 | 0.5538 | 0.7741 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
SaberMolaei/speecht5_tts_ckb
|
SaberMolaei
| 2023-05-09T10:21:25Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"hf-tts-leaderboard",
"generated_from_trainer",
"ckb",
"dataset:mozilla-foundation/common_voice_11_0",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-05-09T08:14:35Z |
---
language:
- ckb
tags:
- hf-tts-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: SpeechT5 TTS ckb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS ckb
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5267
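A minimal synthesis sketch, adapted from the generic SpeechT5 example. The speaker x-vector is borrowed from a public English dataset because this card ships no speaker embedding — treat that choice, and the placeholder input text, as assumptions.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "SaberMolaei/speecht5_tts_ckb"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Borrowed speaker embedding (assumption): an x-vector from the CMU ARCTIC set.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="<Central Kurdish text here>", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```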
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5811 | 7.41 | 1000 | 0.5423 |
| 0.5511 | 14.81 | 2000 | 0.5267 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DIAG-PSSeng/cicero-gpt2
|
DIAG-PSSeng
| 2023-05-09T09:45:04Z | 12 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"it",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-08T20:03:43Z |
---
license: openrail
language:
- it
metrics:
- perplexity
---
# cicero-gpt2
<!-- Provide a quick summary of what the model is/does. -->
A fine-tuned version of GroNLP/gpt2-small-italian trained on Italian civil judgments.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Marco Calamo, Francesca De Luzi, Mattia Macrì, Tommaso Mencattini, Massimo Mecella
- **Model type:** gpt2-small-italian
- **Language(s) (NLP):** Italian
- **License:** openrail
- **Finetuned from model:** [GroNLP/gpt2-small-italian](https://huggingface.co/GroNLP/gpt2-small-italian)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [Github](https://github.com/MattiaMacri/Cicero)
- **Paper [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Used to generate parts of sentences (judgments) based on user input. All sensitive data are hidden by design.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
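Until the authors add an official snippet, a minimal generation sketch (the Italian prompt is illustrative only):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="DIAG-PSSeng/cicero-gpt2")
# Illustrative prompt; real inputs should resemble the civil-judgment text used for fine-tuning.
print(generator("Il tribunale, esaminati gli atti,", max_new_tokens=60)[0]["generated_text"])
```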
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
babs001seye/distilbert-base-uncased-finetuned-squad
|
babs001seye
| 2023-05-09T09:18:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-05T13:42:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
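A minimal extractive question-answering sketch (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="babs001seye/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], round(result["score"], 3))
```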
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aitslab/biobert_huner_chemical_v1
|
aitslab
| 2023-05-09T09:11:07Z | 9 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"arxiv:2304.07805",
"doi:10.57967/hf/2033",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-12T16:11:33Z |
---
license: apache-2.0
---
More information can be found in our GitHub repo and paper. Please cite the paper if you use the model.
https://github.com/Aitslab/EasyNER
```bibtex
@article{ahmed2023easyner,
      title={EasyNER: A Customizable Easy-to-Use Pipeline for Deep Learning- and Dictionary-based Named Entity Recognition from Medical Text},
      author={Rafsan Ahmed and Petter Berntsson and Alexander Skafte and Salma Kazemi Rashed and Marcus Klang and Adam Barvesten and Ola Olde and William Lindholm and Antton Lamarca Arrizabalaga and Pierre Nugues and Sonja Aits},
      year={2023},
      eprint={2304.07805},
      archivePrefix={arXiv},
      primaryClass={q-bio.QM}
}
```
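A minimal tagging sketch with the 🤗 token-classification pipeline (the aggregation strategy and example sentence are our choices, not prescribed by the authors):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="aitslab/biobert_huner_chemical_v1",
    aggregation_strategy="simple",
)
print(ner("Treatment with aspirin and ibuprofen reduced the inflammation."))
```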
|
aitslab/biobert_huner_gene_v1
|
aitslab
| 2023-05-09T09:10:42Z | 127 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"arxiv:2304.07805",
"doi:10.57967/hf/2031",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-12T16:14:52Z |
---
license: apache-2.0
---
More information can be found in our GitHub repo and paper. Please cite the paper if you use the model.
https://github.com/Aitslab/EasyNER
```bibtex
@article{ahmed2023easyner,
      title={EasyNER: A Customizable Easy-to-Use Pipeline for Deep Learning- and Dictionary-based Named Entity Recognition from Medical Text},
      author={Rafsan Ahmed and Petter Berntsson and Alexander Skafte and Salma Kazemi Rashed and Marcus Klang and Adam Barvesten and Ola Olde and William Lindholm and Antton Lamarca Arrizabalaga and Pierre Nugues and Sonja Aits},
      year={2023},
      eprint={2304.07805},
      archivePrefix={arXiv},
      primaryClass={q-bio.QM}
}
```
|
aitslab/biobert_huner_disease_v1
|
aitslab
| 2023-05-09T09:10:16Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"arxiv:2304.07805",
"doi:10.57967/hf/2034",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-12T16:13:23Z |
---
license: apache-2.0
---
More information can be found in our GitHub repo and paper. Please cite the paper if you use the model.
https://github.com/Aitslab/EasyNER
```bibtex
@article{ahmed2023easyner,
      title={EasyNER: A Customizable Easy-to-Use Pipeline for Deep Learning- and Dictionary-based Named Entity Recognition from Medical Text},
      author={Rafsan Ahmed and Petter Berntsson and Alexander Skafte and Salma Kazemi Rashed and Marcus Klang and Adam Barvesten and Ola Olde and William Lindholm and Antton Lamarca Arrizabalaga and Pierre Nugues and Sonja Aits},
      year={2023},
      eprint={2304.07805},
      archivePrefix={arXiv},
      primaryClass={q-bio.QM}
}
```
|
aitslab/biobert_huner_species_v1
|
aitslab
| 2023-05-09T09:09:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"arxiv:2304.07805",
"doi:10.57967/hf/2032",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-12T16:16:19Z |
---
license: apache-2.0
---
More information can be found in our GitHub repo and paper. Please cite the paper if you use the model.
https://github.com/Aitslab/EasyNER
```bibtex
@article{ahmed2023easyner,
      title={EasyNER: A Customizable Easy-to-Use Pipeline for Deep Learning- and Dictionary-based Named Entity Recognition from Medical Text},
      author={Rafsan Ahmed and Petter Berntsson and Alexander Skafte and Salma Kazemi Rashed and Marcus Klang and Adam Barvesten and Ola Olde and William Lindholm and Antton Lamarca Arrizabalaga and Pierre Nugues and Sonja Aits},
      year={2023},
      eprint={2304.07805},
      archivePrefix={arXiv},
      primaryClass={q-bio.QM}
}
```
|
dxli/dog2
|
dxli
| 2023-05-09T09:08:16Z | 12 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T08:26:09Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/dog2
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
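A minimal sketch of using the learned embedding with diffusers (requires a diffusers version that provides `load_textual_inversion`; the placeholder token in the prompt is an assumption — check the repo's training config for the actual token string):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("dxli/dog2")

# "<dog2>" is a guessed placeholder token; replace it with the token used during training.
image = pipe("a photo of <dog2> sitting in a meadow").images[0]
image.save("dog2.png")
```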
|
hitachi-nlp/roberta-base_last-4-chars_acl2023
|
hitachi-nlp
| 2023-05-09T09:03:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-09T07:13:40Z |
---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- bookcorpus
language:
- en
library_name: transformers
pipeline_tag: fill-mask
---
This is the pretrained model of Last 4 Chars. Please refer to our [GitHub](https://github.com/hitachi-nlp/mlm-probe-acl2023) page for more details.
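A minimal fill-mask sketch (note that this checkpoint uses a non-standard subword segmentation, so rely on the tokenizer shipped with the repo):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="hitachi-nlp/roberta-base_last-4-chars_acl2023")
# RoBERTa-style models use <mask> as the mask token.
for prediction in fill("The capital of France is <mask>.")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```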
|
hitachi-nlp/roberta-base_last-5-chars_acl2023
|
hitachi-nlp
| 2023-05-09T09:02:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-09T07:14:03Z |
---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- bookcorpus
language:
- en
library_name: transformers
pipeline_tag: fill-mask
---
This is the pretrained model of Last 5 Chars. Please refer to our [GitHub](https://github.com/hitachi-nlp/mlm-probe-acl2023) page for more details.
|
hitachi-nlp/roberta-base_last-9-chars_acl2023
|
hitachi-nlp
| 2023-05-09T09:00:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-09T07:14:27Z |
---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- bookcorpus
language:
- en
library_name: transformers
pipeline_tag: fill-mask
---
This is the pretrained model of Last 9 Chars. Please refer to our [GitHub](https://github.com/hitachi-nlp/mlm-probe-acl2023) page for more details.
|
hitachi-nlp/roberta-base_last-2-chars_acl2023
|
hitachi-nlp
| 2023-05-09T08:59:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-09T07:12:56Z |
---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- bookcorpus
language:
- en
library_name: transformers
pipeline_tag: fill-mask
---
This is the pretrained model of Last 2 Chars. Please refer to our [GitHub](https://github.com/hitachi-nlp/mlm-probe-acl2023) page for more details.
|
dxli/grey_sloth_plushie
|
dxli
| 2023-05-09T08:55:25Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T08:00:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/grey_sloth_plushie
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
FatihC/swin-tiny-patch4-window7-224-finetuned-eurosat-people
|
FatihC
| 2023-05-09T08:51:40Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-09T08:18:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat-people
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: images
split: train
args: images
metrics:
- name: Accuracy
type: accuracy
value: 0.952
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat-people
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1711
- Accuracy: 0.952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.3073 | 0.912 |
| No log | 2.0 | 8 | 0.2076 | 0.92 |
| 0.4055 | 3.0 | 12 | 0.1789 | 0.928 |
| 0.4055 | 4.0 | 16 | 0.1911 | 0.928 |
| 0.3045 | 5.0 | 20 | 0.1695 | 0.928 |
| 0.3045 | 6.0 | 24 | 0.1756 | 0.944 |
| 0.3045 | 7.0 | 28 | 0.1751 | 0.944 |
| 0.2419 | 8.0 | 32 | 0.1727 | 0.944 |
| 0.2419 | 9.0 | 36 | 0.1711 | 0.952 |
| 0.2375 | 10.0 | 40 | 0.1711 | 0.952 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
funasr/fsmn-vad-onnx
|
funasr
| 2023-05-09T08:50:58Z | 0 | 17 | null |
[
"onnx",
"FunASR",
"FSMN-VAD",
"voice-activity-detection",
"arxiv:1803.05030",
"license:apache-2.0",
"region:us"
] |
voice-activity-detection
| 2023-04-22T13:55:01Z |
---
license: apache-2.0
pipeline_tag: voice-activity-detection
tags:
- FunASR
- FSMN-VAD
---
## Introduce
Voice activity detection (VAD) plays an important role in speech recognition systems by detecting the beginning and end of effective speech. FunASR provides an efficient VAD model based on the [FSMN structure](https://arxiv.org/abs/1803.05030). To improve model discrimination, we use monophones as modeling units, given the relatively rich speech information. During inference, the VAD system requires post-processing for improved robustness, including operations such as threshold settings and sliding windows.
This repository demonstrates how to leverage FSMN-VAD in conjunction with the funasr_onnx runtime. The underlying model is derived from [FunASR](https://github.com/alibaba-damo-academy/FunASR), which was trained on a massive 5,000-hour dataset.
We have released numerous industrial-grade models, including speech recognition, voice activity detection, punctuation restoration, speaker verification, speaker diarization, and timestamp prediction (force alignment). To learn more about these models, kindly refer to the [documentation](https://alibaba-damo-academy.github.io/FunASR/en/index.html) available on FunASR. If you are interested in leveraging advanced AI technology for your speech-related projects, we invite you to explore the possibilities offered by [FunASR](https://github.com/alibaba-damo-academy/FunASR).
## Install funasr_onnx
```shell
pip install -U funasr_onnx
# For the users in China, you could install with the command:
# pip install -U funasr_onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
## Download the model
```shell
git lfs install
git clone https://huggingface.co/funasr/FSMN-VAD
```
## Inference with runtime
### Voice Activity Detection
#### FSMN-VAD
```python
from funasr_onnx import Fsmn_vad
model_dir = "./FSMN-VAD"
model = Fsmn_vad(model_dir, quantize=True)
wav_path = "./FSMN-VAD/asr_example.wav"
result = model(wav_path)
print(result)
```
- `model_dir`: the model path, which contains `model.onnx`, `config.yaml`, `am.mvn`
- `batch_size`: `1` (Default), the batch size used during inference
- `device_id`: `-1` (Default), infer on CPU. If you want to infer with GPU, set it to the GPU id (please make sure that you have installed onnxruntime-gpu)
- `quantize`: `False` (Default), load the model of `model.onnx` in `model_dir`. If set `True`, load the model of `model_quant.onnx` in `model_dir`
- `intra_op_num_threads`: `4` (Default), sets the number of threads used for intraop parallelism on CPU
Input: WAV format audio; supported input types: `str, np.ndarray, List[str]`
Output: voice activity detection results (speech segment timestamps) for each input
## Citations
``` bibtex
@inproceedings{gao2022paraformer,
title={Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition},
author={Gao, Zhifu and Zhang, Shiliang and McLoughlin, Ian and Yan, Zhijie},
booktitle={INTERSPEECH},
year={2022}
}
```
|
rimOPS/embeddings
|
rimOPS
| 2023-05-09T08:47:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-16T03:58:57Z |
---
license: creativeml-openrail-m
---
|
fengbj/test-llm
|
fengbj
| 2023-05-09T08:35:26Z | 0 | 0 |
transformers
|
[
"transformers",
"text-to-speech",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-05-09T07:06:03Z |
---
license: mit
pipeline_tag: text-to-speech
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dxli/dog7
|
dxli
| 2023-05-09T08:32:36Z | 21 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T07:47:26Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/dog7
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
DmitriyVasiliev/autotrain-xls-mt5-rua-par-rua-sent-dia-56800131755
|
DmitriyVasiliev
| 2023-05-09T08:29:51Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:DmitriyVasiliev/autotrain-data-xls-mt5-rua-par-rua-sent-dia",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-05-09T08:14:15Z |
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- DmitriyVasiliev/autotrain-data-xls-mt5-rua-par-rua-sent-dia
co2_eq_emissions:
emissions: 5.948993226966507
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 56800131755
- CO2 Emissions (in grams): 5.9490
## Validation Metrics
- Loss: 1.627
- Rouge1: 4.517
- Rouge2: 1.694
- RougeL: 4.556
- RougeLsum: 4.550
- Gen Len: 29.800
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/DmitriyVasiliev/autotrain-xls-mt5-rua-par-rua-sent-dia-56800131755
```
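Alternatively, a local inference sketch with 🤗 Transformers (the input text is just the widget placeholder, not a realistic document):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="DmitriyVasiliev/autotrain-xls-mt5-rua-par-rua-sent-dia-56800131755",
)
print(summarizer("I love AutoTrain", max_length=60)[0]["summary_text"])
```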
|
kasunw/PPO-LunarLander-v2
|
kasunw
| 2023-05-09T08:25:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T08:25:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.94 +/- 24.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption — check this repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; use the actual .zip checkpoint name stored in this repo.
checkpoint = load_from_hub(repo_id="kasunw/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
photel/ppo-LunarLander-v2
|
photel
| 2023-05-09T08:15:40Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T08:15:19Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.54 +/- 24.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption — check this repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; use the actual .zip checkpoint name stored in this repo.
checkpoint = load_from_hub(repo_id="photel/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dxli/fancy_boot
|
dxli
| 2023-05-09T08:00:35Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T07:06:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/fancy_boot
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
rohan1221/inpaint-furniture
|
rohan1221
| 2023-05-09T07:58:29Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-08T17:38:49Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### inpaint_furniture Dreambooth model trained by rohan1221 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
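You can also load the weights directly with diffusers; a minimal sketch (the prompt token is an assumption — use the instance prompt from training):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "rohan1221/inpaint-furniture", torch_dtype=torch.float16
).to("cuda")
# "inpaint_furniture" as a prompt token is assumed from the concept name.
image = pipe("a photo of inpaint_furniture sofa in a bright living room").images[0]
image.save("sample.png")
```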
Sample pictures of this concept:
|
dxli/teapot
|
dxli
| 2023-05-09T07:49:41Z | 28 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T07:10:43Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/teapot
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
christianutama/ppo-LunarLander-v2
|
christianutama
| 2023-05-09T07:49:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T07:48:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.02 +/- 19.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption — check this repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; use the actual .zip checkpoint name stored in this repo.
checkpoint = load_from_hub(repo_id="christianutama/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Oscar-chen/roberta-base
|
Oscar-chen
| 2023-05-09T07:48:54Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-04T07:19:48Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1131
- Accuracy: 0.9637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 0.3406 | 0.8619 |
| No log | 2.0 | 200 | 0.2220 | 0.9119 |
| No log | 3.0 | 300 | 0.1429 | 0.9487 |
| No log | 4.0 | 400 | 0.1131 | 0.9637 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
djkcyl/DDSP-SVC
|
djkcyl
| 2023-05-09T07:46:43Z | 0 | 2 | null |
[
"svc",
"audio-to-audio",
"zh",
"license:agpl-3.0",
"region:us"
] |
audio-to-audio
| 2023-05-08T17:56:31Z |
---
license: agpl-3.0
language:
- zh
pipeline_tag: audio-to-audio
tags:
- svc
---
# DDSP-SVC 3.0 One-Click Package
source: https://github.com/yxlllc/DDSP-SVC
password: DDSP@60
|
Team-PIXEL/pixel-tiny-continuous
|
Team-PIXEL
| 2023-05-09T07:39:56Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pixel",
"masked-auto-encoding",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-05-02T21:13:43Z |
---
tags:
- masked-auto-encoding
- generated_from_trainer
model-index:
- name: pixel-tiny-cont
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-tiny-cont
This model was trained from scratch on the wikipedia + bookcorpus dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 1024
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 250000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.7411 | 0.06 | 1000 | 0.9070 |
| 0.7395 | 0.12 | 2000 | 0.9064 |
| 0.7387 | 0.18 | 3000 | 0.9047 |
| 0.7382 | 0.25 | 4000 | 0.9015 |
| 0.7381 | 0.31 | 5000 | 0.9044 |
| 0.7379 | 0.37 | 6000 | 0.9042 |
| 0.7379 | 0.43 | 7000 | 0.9054 |
| 0.7378 | 0.49 | 8000 | 0.9035 |
| 0.7378 | 0.55 | 9000 | 0.9026 |
| 0.7371 | 0.61 | 10000 | 0.9038 |
| 0.7369 | 0.67 | 11000 | 0.9027 |
| 0.7368 | 0.74 | 12000 | 0.9022 |
| 0.7368 | 0.8 | 13000 | 0.8987 |
| 0.7374 | 0.86 | 14000 | 0.9014 |
| 0.7369 | 0.92 | 15000 | 0.9002 |
| 0.7369 | 0.98 | 16000 | 0.9002 |
| 0.7372 | 1.04 | 17000 | 0.9019 |
| 0.737 | 1.1 | 18000 | 0.9001 |
| 0.737 | 1.16 | 19000 | 0.9006 |
| 0.7369 | 1.23 | 20000 | 0.9007 |
| 0.7365 | 1.29 | 21000 | 0.8698 |
| 0.7363 | 1.35 | 22000 | 0.8700 |
| 0.7366 | 1.41 | 23000 | 0.9021 |
| 0.7362 | 1.47 | 24000 | 0.8763 |
| 0.7082 | 1.53 | 25000 | 0.8719 |
| 0.6774 | 1.59 | 26000 | 0.8876 |
| 0.6525 | 1.65 | 27000 | 0.8905 |
| 0.6022 | 1.72 | 28000 | 0.8856 |
| 0.5874 | 1.78 | 29000 | 0.8794 |
| 0.5765 | 1.84 | 30000 | 0.8806 |
| 0.5685 | 1.9 | 31000 | 0.8747 |
| 0.564 | 1.96 | 32000 | 0.8779 |
| 0.5606 | 2.02 | 33000 | 0.8762 |
| 0.5574 | 2.08 | 34000 | 0.8703 |
| 0.5528 | 2.14 | 35000 | 0.8664 |
| 0.5494 | 2.21 | 36000 | 0.8717 |
| 0.5448 | 2.27 | 37000 | 0.8673 |
| 0.5419 | 2.33 | 38000 | 0.8637 |
| 0.5385 | 2.39 | 39000 | 0.8634 |
| 0.536 | 2.45 | 40000 | 0.8661 |
| 0.5336 | 2.51 | 41000 | 0.8631 |
| 0.5316 | 2.57 | 42000 | 0.8606 |
| 0.5297 | 2.63 | 43000 | 0.8589 |
| 0.5305 | 2.7 | 44000 | 0.8570 |
| 0.5262 | 2.76 | 45000 | 0.8559 |
| 0.5247 | 2.82 | 46000 | 0.8634 |
| 0.5235 | 2.88 | 47000 | 0.8606 |
| 0.5227 | 2.94 | 48000 | 0.8610 |
| 0.5206 | 3.0 | 49000 | 0.8610 |
| 0.5194 | 3.06 | 50000 | 0.8611 |
| 0.5183 | 3.12 | 51000 | 0.8579 |
| 0.5175 | 3.19 | 52000 | 0.8598 |
| 0.5163 | 3.25 | 53000 | 0.8521 |
| 0.5156 | 3.31 | 54000 | 0.8550 |
| 0.5148 | 3.37 | 55000 | 0.8504 |
| 0.5139 | 3.43 | 56000 | 0.8530 |
| 0.5133 | 3.49 | 57000 | 0.8589 |
| 0.5126 | 3.55 | 58000 | 0.8561 |
| 0.5119 | 3.62 | 59000 | 0.8574 |
| 0.5127 | 3.68 | 60000 | 0.8624 |
| 0.5105 | 3.74 | 61000 | 0.8522 |
| 0.5099 | 3.8 | 62000 | 0.8550 |
| 0.5094 | 3.86 | 63000 | 0.8537 |
| 0.509 | 3.92 | 64000 | 0.8535 |
| 0.5091 | 3.98 | 65000 | 0.8592 |
| 0.5079 | 4.04 | 66000 | 0.8554 |
| 0.5074 | 4.11 | 67000 | 0.8516 |
| 0.5069 | 4.17 | 68000 | 0.8491 |
| 0.5066 | 4.23 | 69000 | 0.8571 |
| 0.5068 | 4.29 | 70000 | 0.8536 |
| 0.5066 | 4.35 | 71000 | 0.9288 |
| 0.5051 | 4.41 | 72000 | 0.8597 |
| 0.5045 | 4.47 | 73000 | 0.8555 |
| 0.5043 | 4.53 | 74000 | 0.8547 |
| 0.5039 | 4.6 | 75000 | 0.8561 |
| 0.504 | 4.66 | 76000 | 0.8541 |
| 0.5026 | 4.72 | 77000 | 0.8490 |
| 0.5024 | 4.78 | 78000 | 0.8499 |
| 0.5019 | 4.84 | 79000 | 0.8522 |
| 0.5014 | 4.9 | 80000 | 0.8508 |
| 0.5008 | 4.96 | 81000 | 0.8512 |
| 0.5002 | 5.02 | 82000 | 0.8470 |
| 0.4995 | 5.09 | 83000 | 0.8462 |
| 0.4991 | 5.15 | 84000 | 0.8455 |
| 0.4982 | 5.21 | 85000 | 0.8465 |
| 0.4978 | 5.27 | 86000 | 0.8434 |
| 0.4969 | 5.33 | 87000 | 0.8432 |
| 0.4964 | 5.39 | 88000 | 0.8417 |
| 0.4957 | 5.45 | 89000 | 0.8363 |
| 0.495 | 5.51 | 90000 | 0.8392 |
| 0.4946 | 5.58 | 91000 | 0.8401 |
| 0.4935 | 5.64 | 92000 | 0.8373 |
| 0.4929 | 5.7 | 93000 | 0.8401 |
| 0.492 | 5.76 | 94000 | 0.8356 |
| 0.4912 | 5.82 | 95000 | 0.8334 |
| 0.4904 | 5.88 | 96000 | 0.8281 |
| 0.4898 | 5.94 | 97000 | 0.8338 |
| 0.4891 | 6.0 | 98000 | 0.8300 |
| 0.4882 | 6.07 | 99000 | 0.8262 |
| 0.4876 | 6.13 | 100000 | 0.8172 |
| 0.4868 | 6.19 | 101000 | 0.8240 |
| 0.4861 | 6.25 | 102000 | 0.8212 |
| 0.4854 | 6.31 | 103000 | 0.8243 |
| 0.4847 | 6.37 | 104000 | 0.8228 |
| 0.4841 | 6.43 | 105000 | 0.8185 |
| 0.4837 | 6.5 | 106000 | 0.8177 |
| 0.4827 | 6.56 | 107000 | 0.8140 |
| 0.4819 | 6.62 | 108000 | 0.8147 |
| 0.4813 | 6.68 | 109000 | 0.8172 |
| 0.4807 | 6.74 | 110000 | 0.8149 |
| 0.4801 | 6.8 | 111000 | 0.8152 |
| 0.4792 | 6.86 | 112000 | 0.8089 |
| 0.4785 | 6.92 | 113000 | 0.8084 |
| 0.4777 | 6.99 | 114000 | 0.8103 |
| 0.477 | 7.05 | 115000 | 0.8104 |
| 0.4772 | 7.11 | 116000 | 0.8142 |
| 0.4754 | 7.17 | 117000 | 0.8159 |
| 0.4748 | 7.23 | 118000 | 0.8092 |
| 0.4738 | 7.29 | 119000 | 0.8036 |
| 0.473 | 7.35 | 120000 | 0.8085 |
| 0.4724 | 7.41 | 121000 | 0.8084 |
| 0.4714 | 7.48 | 122000 | 0.8066 |
| 0.4705 | 7.54 | 123000 | 0.8094 |
| 0.4699 | 7.6 | 124000 | 0.8095 |
| 0.4693 | 7.66 | 125000 | 0.8101 |
| 0.4685 | 7.72 | 126000 | 0.8092 |
| 0.4679 | 7.78 | 127000 | 0.8025 |
| 0.4672 | 7.84 | 128000 | 0.8000 |
| 0.4665 | 7.9 | 129000 | 0.8020 |
| 0.4659 | 7.97 | 130000 | 0.8022 |
| 0.4653 | 8.03 | 131000 | 0.8071 |
| 0.4647 | 8.09 | 132000 | 0.7994 |
| 0.4639 | 8.15 | 133000 | 0.8034 |
| 0.4634 | 8.21 | 134000 | 0.8022 |
| 0.4656 | 8.27 | 135000 | 0.8052 |
| 0.4623 | 8.33 | 136000 | 0.7989 |
| 0.4617 | 8.39 | 137000 | 0.7993 |
| 0.4612 | 8.46 | 138000 | 0.8003 |
| 0.4608 | 8.52 | 139000 | 0.7990 |
| 0.4603 | 8.58 | 140000 | 0.8074 |
| 0.4597 | 8.64 | 141000 | 0.8089 |
| 0.4591 | 8.7 | 142000 | 0.8040 |
| 0.4586 | 8.76 | 143000 | 0.7993 |
| 0.4584 | 8.82 | 144000 | 0.8004 |
| 0.4594 | 8.88 | 145000 | 0.7991 |
| 0.4574 | 8.95 | 146000 | 0.7956 |
| 0.4571 | 9.01 | 147000 | 0.7948 |
| 0.4565 | 9.07 | 148000 | 0.7982 |
| 0.4563 | 9.13 | 149000 | 0.7960 |
| 0.4555 | 9.19 | 150000 | 0.8043 |
| 0.4551 | 9.25 | 151000 | 0.8021 |
| 0.4549 | 9.31 | 152000 | 0.7972 |
| 0.4545 | 9.38 | 153000 | 0.8003 |
| 0.4542 | 9.44 | 154000 | 0.8000 |
| 0.4539 | 9.5 | 155000 | 0.7960 |
| 0.4533 | 9.56 | 156000 | 0.8035 |
| 0.453 | 9.62 | 157000 | 0.7953 |
| 0.4527 | 9.68 | 158000 | 0.7937 |
| 0.4524 | 9.74 | 159000 | 0.8021 |
| 0.4519 | 9.8 | 160000 | 0.8028 |
| 0.4517 | 9.87 | 161000 | 0.8006 |
| 0.4514 | 9.93 | 162000 | 0.8067 |
| 0.4512 | 9.99 | 163000 | 0.7990 |
| 0.4508 | 10.05 | 164000 | 0.8041 |
| 0.4504 | 10.11 | 165000 | 0.7995 |
| 0.4501 | 10.17 | 166000 | 0.7979 |
| 0.4499 | 10.23 | 167000 | 0.7969 |
| 0.4497 | 10.29 | 168000 | 0.8041 |
| 0.4495 | 10.36 | 169000 | 0.8050 |
| 0.4492 | 10.42 | 170000 | 0.7999 |
| 0.4494 | 10.48 | 171000 | 0.7992 |
| 0.4486 | 10.54 | 172000 | 0.8019 |
| 0.4485 | 10.6 | 173000 | 0.8026 |
| 0.4483 | 10.66 | 174000 | 0.8009 |
| 0.448 | 10.72 | 175000 | 0.8022 |
| 0.4479 | 10.78 | 176000 | 0.8016 |
| 0.4476 | 10.85 | 177000 | 0.7988 |
| 0.4474 | 10.91 | 178000 | 0.8025 |
| 0.4471 | 10.97 | 179000 | 0.8035 |
| 0.4471 | 11.03 | 180000 | 0.7983 |
| 0.4467 | 11.09 | 181000 | 0.8010 |
| 0.4463 | 11.15 | 182000 | 0.8035 |
| 0.4463 | 11.21 | 183000 | 0.8049 |
| 0.4462 | 11.27 | 184000 | 0.7998 |
| 0.4459 | 11.34 | 185000 | 0.7988 |
| 0.4457 | 11.4 | 186000 | 0.8064 |
| 0.4456 | 11.46 | 187000 | 0.8042 |
| 0.4454 | 11.52 | 188000 | 0.7998 |
| 0.4453 | 11.58 | 189000 | 0.8026 |
| 0.4449 | 11.64 | 190000 | 0.7993 |
| 0.4448 | 11.7 | 191000 | 0.8037 |
| 0.4448 | 11.76 | 192000 | 0.8038 |
| 0.4445 | 11.83 | 193000 | 0.8010 |
| 0.4442 | 11.89 | 194000 | 0.7977 |
| 0.4443 | 11.95 | 195000 | 0.8008 |
| 0.4441 | 12.01 | 196000 | 0.8048 |
| 0.4439 | 12.07 | 197000 | 0.8034 |
| 0.4438 | 12.13 | 198000 | 0.8052 |
| 0.4437 | 12.19 | 199000 | 0.8041 |
| 0.4434 | 12.25 | 200000 | 0.8001 |
| 0.4434 | 12.32 | 201000 | 0.8013 |
| 0.4432 | 12.38 | 202000 | 0.7987 |
| 0.443 | 12.44 | 203000 | 0.7962 |
| 0.443 | 12.5 | 204000 | 0.8017 |
| 0.4429 | 12.56 | 205000 | 0.7996 |
| 0.4428 | 12.62 | 206000 | 0.7997 |
| 0.4425 | 12.68 | 207000 | 0.8017 |
| 0.4424 | 12.75 | 208000 | 0.8008 |
| 0.4424 | 12.81 | 209000 | 0.8052 |
| 0.4422 | 12.87 | 210000 | 0.8004 |
| 0.4421 | 12.93 | 211000 | 0.8023 |
| 0.4421 | 12.99 | 212000 | 0.8014 |
| 0.442 | 13.05 | 213000 | 0.7999 |
| 0.4418 | 13.11 | 214000 | 0.8019 |
| 0.4417 | 13.17 | 215000 | 0.7996 |
| 0.4416 | 13.24 | 216000 | 0.8007 |
| 0.4414 | 13.3 | 217000 | 0.8029 |
| 0.4415 | 13.36 | 218000 | 0.7990 |
| 0.4413 | 13.42 | 219000 | 0.7997 |
| 0.4413 | 13.48 | 220000 | 0.7997 |
| 0.4412 | 13.54 | 221000 | 0.7996 |
| 0.4411 | 13.6 | 222000 | 0.8003 |
| 0.4411 | 13.66 | 223000 | 0.7993 |
| 0.4411 | 13.73 | 224000 | 0.8005 |
| 0.4409 | 13.79 | 225000 | 0.8013 |
| 0.4409 | 13.85 | 226000 | 0.8016 |
| 0.4409 | 13.91 | 227000 | 0.7994 |
| 0.4408 | 13.97 | 228000 | 0.8023 |
| 0.4407 | 14.03 | 229000 | 0.8013 |
| 0.4406 | 14.09 | 230000 | 0.8038 |
| 0.4408 | 14.15 | 231000 | 0.7994 |
| 0.4406 | 14.22 | 232000 | 0.8007 |
| 0.4404 | 14.28 | 233000 | 0.8006 |
| 0.4403 | 14.34 | 234000 | 0.7987 |
| 0.4405 | 14.4 | 235000 | 0.8010 |
| 0.4404 | 14.46 | 236000 | 0.7982 |
| 0.4404 | 14.52 | 237000 | 0.7985 |
| 0.4403 | 14.58 | 238000 | 0.8016 |
| 0.4402 | 14.64 | 239000 | 0.8025 |
| 0.4402 | 14.71 | 240000 | 0.8020 |
| 0.4401 | 14.77 | 241000 | 0.8009 |
| 0.4401 | 14.83 | 242000 | 0.8015 |
| 0.4401 | 14.89 | 243000 | 0.8010 |
| 0.44 | 14.95 | 244000 | 0.7996 |
| 0.4402 | 15.01 | 245000 | 0.8014 |
| 0.44 | 15.07 | 246000 | 0.8007 |
| 0.44 | 15.13 | 247000 | 0.7984 |
| 0.44 | 15.2 | 248000 | 0.8009 |
| 0.4399 | 15.26 | 249000 | 0.8006 |
| 0.4399 | 15.32 | 250000 | 0.8016 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
|
thuyentruong/a2c-AntBulletEnv-v0
|
thuyentruong
| 2023-05-09T07:30:49Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T07:29:46Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1741.60 +/- 107.57
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption — check this repo's file list for the actual name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed; use the actual .zip checkpoint name stored in this repo.
checkpoint = load_from_hub(repo_id="thuyentruong/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-3
|
xinyixiuxiu
| 2023-05-09T07:25:13Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T06:49:25Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-3
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0713
- Train Accuracy: 0.9771
- Validation Loss: 0.1705
- Validation Accuracy: 0.9541
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0713 | 0.9771 | 0.1705 | 0.9541 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
|
dxli/robot_toy
|
dxli
| 2023-05-09T07:10:12Z | 13 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T06:26:06Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/robot_toy
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
dxli/dog5
|
dxli
| 2023-05-09T07:05:13Z | 21 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T06:11:23Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/dog5
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
dxli/pink_sunglasses
|
dxli
| 2023-05-09T07:03:57Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T06:21:03Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/pink_sunglasses
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
gan11/q-FrozenLake-v1-4x4-noSlippery
|
gan11
| 2023-05-09T07:01:21Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-09T07:01:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

model = load_from_hub(repo_id="gan11/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
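The snippet above assumes a `load_from_hub` helper like the one used in the Hugging Face Deep RL course; a minimal sketch of such a helper:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table dict from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```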
|
zxy1231/tm_simcse_zh_model
|
zxy1231
| 2023-05-09T06:59:09Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-09T06:50:51Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 313 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 500,
"weight_decay": 0.01
}
```
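For readers who want to reproduce a comparable setup, here is a hedged training sketch that wires the parameters above into the sentence-transformers API. The actual training pairs and base checkpoint are not documented in this card, so `train_examples` and `bert-base-chinese` below are placeholders, not the author's data or starting model.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Placeholder pairs: MultipleNegativesRankingLoss expects (anchor, positive) examples.
train_examples = [
    InputExample(texts=["an anchor sentence", "a matching positive sentence"]),
    InputExample(texts=["another anchor", "its positive counterpart"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# Placeholder base checkpoint; the card does not state which model was fine-tuned.
model = SentenceTransformer("bert-base-chinese")
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=500,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```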
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Cynthiaiii4/Text_classification_model_bbc
|
Cynthiaiii4
| 2023-05-09T06:57:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T06:52:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_model_bbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_model_bbc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6851
- Accuracy: 0.78
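A quick way to try the checkpoint is the `text-classification` pipeline. A minimal sketch; note that the label names returned depend on the `id2label` mapping saved with the model, which is not documented here:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Cynthiaiii4/Text_classification_model_bbc",
)
print(classifier("The central bank raised interest rates for the third time this year."))
# -> [{'label': ..., 'score': ...}]  (labels follow the model's id2label mapping)
```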
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 0.6159 | 0.795 |
| No log | 2.0 | 200 | 0.6851 | 0.78 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dxli/clock
|
dxli
| 2023-05-09T06:55:26Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-09T06:08:29Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dxli/clock
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
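A minimal loading sketch with `diffusers` is shown below; the placeholder token is an assumption (written here as `<clock>`), so check the repository files for the exact token string used during training.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned textual inversion embedding from this repository.
pipe.load_textual_inversion("dxli/clock")

# Assumption: the placeholder token is "<clock>"; replace it with the token
# actually defined by the embedding if it differs.
image = pipe("a photo of a <clock> on a wooden desk").images[0]
image.save("clock.png")
```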
|
Renfeld/gabon
|
Renfeld
| 2023-05-09T06:53:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-09T06:53:25Z |
# ⚠️ Type of model/library unknown.
# Feel free to open a Pull Request
# to integrate the Hugging Face model hub
# into the corresponding library =)
|
xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-2
|
xinyixiuxiu
| 2023-05-09T06:37:59Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T06:02:09Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1049
- Train Accuracy: 0.9641
- Validation Loss: 0.1328
- Validation Accuracy: 0.9564
- Epoch: 0
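Since the repository is tagged as a TensorFlow `text-classification` model, a hedged inference sketch is shown below; the label mapping is not documented in this card, so the output is printed as raw class probabilities.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A thoroughly enjoyable film.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # label order depends on the saved config, not documented here
```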
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1049 | 0.9641 | 0.1328 | 0.9564 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
|
keldenl/RedPajama-INCITE-Chat-7B-v0.1-GGML
|
keldenl
| 2023-05-09T06:36:35Z | 5 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:OpenAssistant/oasst1",
"dataset:databricks/databricks-dolly-15k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-09T05:23:18Z |
---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
widget:
- text: "<human>: Write an email to my friends inviting them to come to my home on Friday for a dinner party, bring their own food to share.\n<bot>:"
example_title: "Email Writing"
- text: "<human>: Create a list of things to do in San Francisco\n<bot>:"
example_title: "Brainstorming"
inference:
parameters:
temperature: 0.7
top_p: 0.7
top_k: 50
max_new_tokens: 128
---
# RedPajama-INCITE-Chat-7B-v0.1
RedPajama-INCITE-Chat-7B-v0.1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
It is fine-tuned on OASST1 and Dolly2 to enhance chatting ability.
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 6.9B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
To prompt the chat model, use the following format:
```
<human>: [Instruction]
<bot>:
```
## GPU Inference
This requires a GPU with 16GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, mathematician, and theoretical biologist.
"""
```
## GPU Inference in Int8
This requires a GPU with 12GB memory.
To run inference with int8, please ensure you have installed `accelerate` and `bitsandbytes`. You can install them with the following command:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
```
## CPU Inference
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.bfloat16)
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing, OBE, FRS, (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
## Direct Use
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
`RedPajama-INCITE-Chat-7B-v0.1` is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
`RedPajama-INCITE-Chat-7B-v0.1` is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
`RedPajama-INCITE-Chat-7B-v0.1`, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 8 A100
- **Optimizer:** Adam
- **Gradient Accumulations**: 1
- **Num of Tokens:** 131M tokens
- **Learning rate:** 1e-5
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
|
metarank/esci-MiniLM-L6-v2
|
metarank
| 2023-05-09T06:23:38Z | 28 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-04-03T22:13:38Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# metarank/esci-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
A [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model fine-tuned on the [Amazon ESCI dataset](https://github.com/amazon-science/esci-data).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('metarank/esci-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
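Because the fine-tuning data consists of query-product pairs, a natural use is ranking product titles against a shopping query. A small sketch (the product titles are made-up examples):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('metarank/esci-MiniLM-L6-v2')

query = "wireless noise cancelling headphones"
products = [
    "Bluetooth over-ear headphones with active noise cancellation",
    "Wired in-ear earbuds with microphone",
    "USB-C charging cable, 2 m",
]

query_emb = model.encode(query, convert_to_tensor=True)
product_emb = model.encode(products, convert_to_tensor=True)

# Rank products by cosine similarity to the query.
scores = util.cos_sim(query_emb, product_emb)[0]
for title, score in sorted(zip(products, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {title}")
```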
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 769 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
* Roman Grebennikov
|
Cynthiaiii4/Text_classification_bert-base-uncased
|
Cynthiaiii4
| 2023-05-09T06:18:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-09T06:14:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4491
- Accuracy: 0.79
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
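A hedged sketch of the equivalent `TrainingArguments` is given below; the dataset loading, tokenization, and `Trainer` wiring are omitted because they are not documented in this card, and `output_dir` is an arbitrary choice.
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="Text_classification_bert-base-uncased",  # arbitrary
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```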
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 0.5127 | 0.78 |
| No log | 2.0 | 200 | 0.4491 | 0.79 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|