| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
joydeeph/ppo-LunarLander-v2 | joydeeph | 2023-06-28T08:42:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-28T08:41:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.93 +/- 21.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# Filename is assumed; check the repo's files for the actual .zip name.
checkpoint = load_from_hub(repo_id="joydeeph/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
PhilSad/poca-SoccerTwos | PhilSad | 2023-06-28T08:36:16Z | 2 | 0 | ml-agents | [
"ml-agents",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-06-28T08:10:30Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: PhilSad/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
kesslya1/F1_Model_Ownmodel | kesslya1 | 2023-06-28T08:33:19Z | 5 | 0 | keras | [
"keras",
"tf-keras",
"image-classification",
"region:us"
]
| image-classification | 2023-06-28T08:30:21Z | ---
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
--- |
MU-NLPC/calc-baseline-t5-xl | MU-NLPC | 2023-06-28T08:10:50Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-06-24T09:44:44Z | This is a baseline model for our [calculator-assisted models](https://huggingface.co/models?search=emnlp2023)
trained on a mixture of all our [Calc-X datasets](https://huggingface.co/datasets?search=emnlp2023).
See the corresponding paper for details.
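A minimal usage sketch with the standard `transformers` seq2seq API (the question below is illustrative, not taken from the Calc-X datasets, and the expected input format may differ — see the paper):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("MU-NLPC/calc-baseline-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("MU-NLPC/calc-baseline-t5-xl")

# Illustrative math word problem; the prompt format used in training may differ.
question = "A farmer has 12 apples and buys 7 more. How many apples does he have now?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```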
The reported results can be reproduced by using [evaluation script](https://github.com/emnlp2023sub/gadgets/blob/65e24e810cf5ea20aceb8a3c8ddbc19f035ab694/examples/test_calc.py)
from the project repository. |
MU-NLPC/calc-baseline-t5-large | MU-NLPC | 2023-06-28T08:09:37Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-06-23T15:31:51Z |
This is a baseline model for our [calculator-assisted models](https://huggingface.co/models?search=emnlp2023)
trained on a mixture of all our [Calc-X datasets](https://huggingface.co/datasets?search=emnlp2023).
See the corresponding paper for details.
The reported results can be reproduced by using [evaluation script](https://github.com/emnlp2023sub/gadgets/blob/65e24e810cf5ea20aceb8a3c8ddbc19f035ab694/examples/test_calc.py)
from the project repository. |
Assem-Ihab/trainingthemodel3 | Assem-Ihab | 2023-06-28T07:53:07Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-06-28T07:39:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: trainingthemodel3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainingthemodel3
This model is a fine-tuned version of [abdalrahmanshahrour/AraBART-summ](https://huggingface.co/abdalrahmanshahrour/AraBART-summ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6417
- Rouge1: 0.1136
- Rouge2: 0.0429
- Rougel: 0.0938
- Rougelsum: 0.0936
- Gen Len: 20.0
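A minimal usage sketch with the `transformers` summarization pipeline (illustrative only; not part of the original training setup):
```python
from transformers import pipeline

# Illustrative sketch: the model is an AraBART-based Arabic summarizer.
summarizer = pipeline("summarization", model="Assem-Ihab/trainingthemodel3")
print(summarizer("<Arabic article text here>", max_length=20)[0]["summary_text"])
```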
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 31 | 2.9297 | 0.1141 | 0.0449 | 0.0941 | 0.0942 | 20.0 |
| No log | 2.0 | 62 | 2.7345 | 0.1099 | 0.0426 | 0.0908 | 0.0908 | 20.0 |
| No log | 3.0 | 93 | 2.6680 | 0.1123 | 0.0428 | 0.093 | 0.0929 | 20.0 |
| No log | 4.0 | 124 | 2.6417 | 0.1136 | 0.0429 | 0.0938 | 0.0936 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Broonion/RLcourse-unit2-Taxi-V3 | Broonion | 2023-06-28T07:40:07Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-28T07:30:55Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RLcourse-unit2-Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Broonion/RLcourse-unit2-Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
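The snippet above assumes a `load_from_hub` helper (defined in the Deep RL course notebooks); a minimal sketch of such a helper, assuming the checkpoint is a pickled dict stored on the Hub:
```python
import pickle

import gymnasium as gym  # assumption: gymnasium (classic gym also works for Taxi-v3)
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the Q-table dict stored in the given Hub repo."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```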
|
swardiantara/drone-sentiment | swardiantara | 2023-06-28T07:25:12Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-04-17T06:24:08Z | ---
license: mit
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
pipeline_tag: text-classification
widget:
- text: "Battery temperature is below 15 degrees Celsius. Warm up the battery temperature to above 25 degree Celsius to ensure a safe flight."
example_title: "Negative Sentiment"
- text: "Aircraft is returning to the Home Point. Minimum RTH Altitude is 30m. You can reset the RTH Altitude in Remote Controller Settings after cancelling RTH if necessary."
example_title: "Positive Sentiment"
--- |
swardiantara/drone-term-extractor | swardiantara | 2023-06-28T07:22:35Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"drone",
"drone forensics",
"named entity recognition",
"en",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-05-23T09:37:03Z | ---
license: gpl
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
pipeline_tag: token-classification
tags:
- drone
- drone forensics
- named entity recognition
widget:
- text: "Compass abnormal. Solution: 1. Ensure there are no magnets or metal objects near the aircraft. The ground or walls may contain metal. Move away from sources of interference before attempting flight. 2. Calibrate Compass Before Takeoff"
example_title: "Example 1"
- text: "The flight attitude angle is larger in Sport mode. The gimbal will rotate when the aircraft starts or stops. Use Normal mode if required for stable shooting"
example_title: "Example 2"
- text: "Motor speed error. Land or return to home promptly. After powering off the aircraft, replace the propeller on the beeping ESC. If the issue persists, contact DJI Support"
example_title: "Example 3"
- text: "GPS signal low. Aircraft unable to auto hover and takeoff restricted. Move to environment with adequate light. Unlocking takeoff restrictions not recommended"
example_title: "Example 4"
--- |
dhillondheeraj84/elephants_yolov8 | dhillondheeraj84 | 2023-06-28T07:22:35Z | 0 | 0 | null | [
"object-detection",
"arxiv:1910.09700",
"region:us"
]
| object-detection | 2023-06-13T08:35:51Z | ---
pipeline_tag: object-detection
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shubham09/falcon_p2 | Shubham09 | 2023-06-28T07:22:31Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-06-28T07:12:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
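A minimal sketch of how the same settings map onto a `transformers` `BitsAndBytesConfig` (a reconstruction for reference, not the original training code; only non-default fields are shown):
```python
import torch
from transformers import BitsAndBytesConfig

# Sketch only: reproduces the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```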
### Framework versions
- PEFT 0.4.0.dev0
|
kejolong/etomisa | kejolong | 2023-06-28T07:21:52Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-06-28T04:29:02Z | ---
license: creativeml-openrail-m
---
|
YakovElm/MariaDB_10_BERT_Under_Sampling | YakovElm | 2023-06-28T07:09:43Z | 52 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-28T07:09:06Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_10_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_10_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0013
- Train Accuracy: 1.0
- Validation Loss: 0.3394
- Validation Accuracy: 0.9523
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
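For reference, a minimal Keras sketch of the optimizer configuration listed above (an assumption-level reconstruction showing only the non-default fields, not the training script itself):
```python
import tensorflow as tf

# Sketch of the optimizer config above (non-default fields only).
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    clipnorm=1.0,
)
```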
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0724 | 0.9958 | 0.2766 | 0.9523 | 0 |
| 0.0024 | 1.0 | 0.3180 | 0.9523 | 1 |
| 0.0013 | 1.0 | 0.3394 | 0.9523 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Broonion/RLcourse-unit2-q-FrozenLake-v1-4x4-noSlippery | Broonion | 2023-06-28T06:43:48Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-28T06:43:46Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Broonion/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Sidharthkr/MPT-7b-chat-GGML | Sidharthkr | 2023-06-28T06:20:54Z | 0 | 1 | null | [
"region:us"
]
| null | 2023-06-28T05:41:12Z | Compatibility
These files are not compatible with llama.cpp.
Currently they can be used with:
- KoboldCpp, a powerful inference engine based on llama.cpp, with a good UI: KoboldCpp
- The ctransformers Python library, which includes LangChain support: ctransformers
- The GPT4All-UI, which uses ctransformers: GPT4All-UI
- rustformers' llm
- The example mpt binary provided with ggml |
YakovElm/Jira_20_BERT_Under_Sampling | YakovElm | 2023-06-28T06:20:21Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-28T06:19:46Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_20_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_20_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0014
- Train Accuracy: 1.0
- Validation Loss: 0.4661
- Validation Accuracy: 0.9338
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1013 | 0.9780 | 0.3644 | 0.9338 | 0 |
| 0.0030 | 1.0 | 0.4356 | 0.9338 | 1 |
| 0.0014 | 1.0 | 0.4661 | 0.9338 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alexshengzhili/llava-7bv0-mm-projector-ft-with-ocr-caption-prompted-paragraph | alexshengzhili | 2023-06-28T06:16:48Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-28T05:25:30Z | ---
license: mit
---
This is the feature-alignment pre-training stage, in which only the multi-modal projector is trained.
The task is to predict the paragraph given the caption, OCR text, and image tokens.
|
alexshengzhili/llava-fte2e-scicap-w-mentions-390K-440MB | alexshengzhili | 2023-06-28T05:44:43Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-26T07:57:45Z | This model is formulated to predict a caption given the image and the paragraph that mentions it.
It is trained on [alexshengzhili/llava-SciCapplus-w-mentions](https://huggingface.co/datasets/alexshengzhili/llava-SciCapplus-w-mentions/tree/main). |
YakovElm/Jira_10_BERT_Under_Sampling | YakovElm | 2023-06-28T05:40:02Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-28T05:39:26Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_10_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_10_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0015
- Train Accuracy: 1.0
- Validation Loss: 3.4990
- Validation Accuracy: 0.4921
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0884 | 0.9895 | 2.6907 | 0.4921 | 0 |
| 0.0032 | 1.0 | 3.2542 | 0.4921 | 1 |
| 0.0015 | 1.0 | 3.4990 | 0.4921 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Broonion/RLcourse-unit1bonus-ppo-Huggy | Broonion | 2023-06-28T05:38:23Z | 28 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-06-28T05:38:14Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Broonion/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Poonnnnnnnn/type-prediction-transformer | Poonnnnnnnn | 2023-06-28T04:44:24Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-05-12T07:21:22Z | ---
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: type-prediction-transformer
results: []
widget:
- text: "ถนนผุพังทำให้เกิดเสียงดังเวลารถวิ่ง"
- text: "ขี่มอไซค์บนทางเท้ามันจะเกินปุยมุ้ย"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# type-prediction-transformer
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0497
- F1: 0.8651
- Roc Auc: 0.9260
- Accuracy: 0.8208
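A minimal inference sketch with the `transformers` pipeline, reusing one of the widget example texts above (treating the task as multi-label is an assumption based on the F1/ROC AUC metrics):
```python
from transformers import pipeline

# Illustrative sketch: top_k=None returns scores for every label.
clf = pipeline("text-classification", model="Poonnnnnnnn/type-prediction-transformer", top_k=None)
print(clf("ถนนผุพังทำให้เกิดเสียงดังเวลารถวิ่ง"))
```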
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 149 | 0.0812 | 0.8070 | 0.8677 | 0.7588 |
| No log | 2.0 | 298 | 0.0591 | 0.8585 | 0.9064 | 0.8141 |
| No log | 3.0 | 447 | 0.0493 | 0.8719 | 0.9144 | 0.8258 |
| 0.0886 | 4.0 | 596 | 0.0506 | 0.8614 | 0.9222 | 0.8090 |
| 0.0886 | 5.0 | 745 | 0.0487 | 0.8683 | 0.9255 | 0.8174 |
| 0.0886 | 6.0 | 894 | 0.0506 | 0.8693 | 0.9291 | 0.8191 |
| 0.0254 | 7.0 | 1043 | 0.0519 | 0.8619 | 0.9307 | 0.8090 |
| 0.0254 | 8.0 | 1192 | 0.0497 | 0.8651 | 0.9260 | 0.8208 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
YakovElm/IntelDAOS_15_BERT_Under_Sampling | YakovElm | 2023-06-28T04:39:11Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-28T04:38:33Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0015
- Train Accuracy: 1.0
- Validation Loss: 0.8058
- Validation Accuracy: 0.8859
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1334 | 0.9790 | 0.5877 | 0.8859 | 0 |
| 0.0037 | 1.0 | 0.7378 | 0.8859 | 1 |
| 0.0015 | 1.0 | 0.8058 | 0.8859 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hoaio/dqn-SpaceInvadersNoFrameskip-v4 | hoaio | 2023-06-28T04:12:28Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-28T04:11:52Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 572.00 +/- 100.70
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hoaio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hoaio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hoaio
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
loghai/q-Taxi-v3 | loghai | 2023-06-28T04:02:55Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-28T04:02:23Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="loghai/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
YakovElm/IntelDAOS_5_BERT_Under_Sampling | YakovElm | 2023-06-28T03:56:38Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-28T03:56:02Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_5_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS_5_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0027
- Train Accuracy: 1.0
- Validation Loss: 0.9951
- Validation Accuracy: 0.8438
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1195 | 0.9700 | 0.6261 | 0.8438 | 0 |
| 0.0096 | 1.0 | 0.8785 | 0.8438 | 1 |
| 0.0027 | 1.0 | 0.9951 | 0.8438 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
QuangHuy54/long-t5-tglobal-large-multimedia | QuangHuy54 | 2023-06-28T03:56:31Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-06-27T12:17:31Z | ---
tags:
- generated_from_trainer
datasets:
- multi_news
model-index:
- name: long-t5-tglobal-large-multimedia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-t5-tglobal-large-multimedia
This model is a fine-tuned version of [QuangHuy54/long-t5-tglobal-large-multimedia](https://huggingface.co/QuangHuy54/long-t5-tglobal-large-multimedia) on the multi_news dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 2.1163 | 0.3333 | 0.0859 | 0.1667 | 0.1666 | 114.46 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ALPHONSE28/SEMANA10_SINTENTICOS | ALPHONSE28 | 2023-06-28T03:45:32Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-28T03:15:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SEMANA10_SINTENTICOS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEMANA10_SINTENTICOS
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3103
- Accuracy: 0.9048
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aka7774/frog_bench | aka7774 | 2023-06-28T03:27:27Z | 0 | 3 | null | [
"region:us"
]
| null | 2023-02-22T07:21:45Z | # frog train benchmark
Hello frog (watch out for spelling mistakes)
## Overview
- This is a benchmark for kohya's train_network
- It uses the sample frog dataset
  - https://note.com/kohya_ss/n/nb20c5187e15a
  - https://note.com/api/v2/attachments/download/e3cd9aa39e600cac51e2022eaa01a931
  - Its contents have been copied into this repository
- The model is SDv1.5
  - https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors
  - A pruned file is provided
## Running
- Windows
  - Download the batch file, place it in any directory (ideally one without Japanese characters or spaces), and run it
  - https://huggingface.co/aka7774/frog_bench/resolve/main/frog_bench.bat
  - sd-scripts and a venv are installed
  - The SDv1.5 model is downloaded
  - Training runs
  - Inference runs; images are saved to sd-scripts/txt2img/
  - The time taken by training (the accelerate command) is displayed
  - It is also saved to sd-scripts/result.txt
- On platforms other than Windows, or if your environment is already set up
  - Run training as in the sample and measure the time taken by the accelerate command
- If bitsandbytes raises an error
  - Removing --use-8bit-adam may help
- If VRAM usage is borderline
  - Lowering batch_size may actually make it finish faster
## Settings
Several settings can be changed by editing the batch file.
- bypass Install CUDA Toolkit
  - Skips installing the CUDA Toolkit by adding pytorch's lib directory to PATH
- Path to
  - Specify full paths when Python and git are not on PATH
  - Installing Python and git themselves must still be done separately
- VERS
  - 1 is the version recommended by kohya (older)
  - 2 is the version recommended by 1111 (newer); xformers may not work with it
- MODE
  - Skips installation or training when running multiple times
- BATCH_SIZE
  - Reduce this when VRAM is under 10GB
# Changes
- num_cpu_threads_per_process (unchanged)
  - 1 is supposedly better, but the sample uses 4, so it is left as-is
- learning_rate (unchanged)
  - A spec change along the way means it should supposedly be one order of magnitude lower, but it is left as-is
- inference (gen_img_diffusers.py)
  - The official option was bf16, but that raised an error saying it only works on A100, so it was changed to fp16
  - Changed so that no interactive input is needed
- accelerate config
  - Skipped by passing arguments to accelerate launch
|
ALPHONSE28/SEMANA10_2 | ALPHONSE28 | 2023-06-28T03:13:42Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-27T03:55:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SEMANA10_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEMANA10_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3581
- Accuracy: 0.88
- F1: 0.9189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
zangyuchen2008/my_awesome_eli5_clm-model | zangyuchen2008 | 2023-06-28T03:13:17Z | 169 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-28T03:05:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 3.8829 |
| No log | 2.0 | 6 | 3.8717 |
| No log | 3.0 | 9 | 3.8681 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Angel-Silva/beto-base-spanish-squades2-finetuned-MeIA-AnalisisDeSentimientos | Angel-Silva | 2023-06-28T03:03:12Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-28T02:12:01Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: beto-base-spanish-squades2-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto-base-spanish-squades2-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [inigopm/beto-base-spanish-squades2](https://huggingface.co/inigopm/beto-base-spanish-squades2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1460
- F1: 0.5859
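A minimal inference sketch with the `transformers` pipeline (the Spanish review below is illustrative, not from the training data):
```python
from transformers import pipeline

# Illustrative sketch: score a Spanish review with the fine-tuned checkpoint.
clf = pipeline("text-classification", model="Angel-Silva/beto-base-spanish-squades2-finetuned-MeIA-AnalisisDeSentimientos")
print(clf("La comida estuvo excelente y el servicio fue muy amable."))
```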
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9491 | 1.0 | 1225 | 0.9712 | 0.5621 |
| 0.795 | 2.0 | 2450 | 0.9874 | 0.5760 |
| 0.5394 | 3.0 | 3675 | 1.1460 | 0.5859 |
| 0.3743 | 4.0 | 4900 | 1.3914 | 0.5836 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jdawnduan/dqn-SpaceInvadersNoFrameskip-v4 | jdawnduan | 2023-06-28T02:53:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-28T02:52:39Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 591.50 +/- 212.53
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jdawnduan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jdawnduan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jdawnduan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ayertey01/wav2vec2-large-xlsr-53-AsanteTwi-06 | ayertey01 | 2023-06-28T02:41:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-06-27T23:03:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-53-AsanteTwi-06
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: tw
split: test
args: tw
metrics:
- name: Wer
type: wer
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-AsanteTwi-06
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6122
- Wer: 0.5
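A minimal transcription sketch with the `transformers` ASR pipeline (the audio path is a placeholder; 16 kHz mono input is assumed, as is standard for wav2vec2):
```python
from transformers import pipeline

# Illustrative sketch: transcribe an audio file with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="ayertey01/wav2vec2-large-xlsr-53-AsanteTwi-06")
print(asr("sample.wav")["text"])
```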
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.3303 | 16.67 | 100 | 5.2842 | 1.0 |
| 2.961 | 33.33 | 200 | 3.1857 | 1.0 |
| 2.8758 | 50.0 | 300 | 2.9988 | 1.0 |
| 2.8331 | 66.67 | 400 | 2.8830 | 1.0 |
| 2.4893 | 83.33 | 500 | 2.1638 | 1.0 |
| 1.1901 | 100.0 | 600 | 0.7611 | 0.5625 |
| 0.5563 | 116.67 | 700 | 0.7503 | 0.5 |
| 0.3916 | 133.33 | 800 | 0.6324 | 0.5 |
| 0.288 | 150.0 | 900 | 0.8291 | 0.5 |
| 0.2176 | 166.67 | 1000 | 0.7383 | 0.5625 |
| 0.1814 | 183.33 | 1100 | 0.6408 | 0.5 |
| 0.1749 | 200.0 | 1200 | 0.5769 | 0.5625 |
| 0.1653 | 216.67 | 1300 | 0.6512 | 0.5 |
| 0.1301 | 233.33 | 1400 | 0.6414 | 0.4375 |
| 0.1375 | 250.0 | 1500 | 0.5970 | 0.5 |
| 0.1173 | 266.67 | 1600 | 0.6119 | 0.5 |
| 0.108 | 283.33 | 1700 | 0.6325 | 0.5 |
| 0.1183 | 300.0 | 1800 | 0.6122 | 0.5 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tyavika/Bert-CNNLSTM-QA-Pt-Squad2 | tyavika | 2023-06-28T02:39:49Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-06-28T00:02:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Bert-CNNLSTM-QA-Pt-Squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-CNNLSTM-QA-Pt-Squad2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0798 | 1.0 | 1644 | 1.5363 |
| 1.194 | 2.0 | 3288 | 1.1882 |
| 0.7465 | 3.0 | 4932 | 1.2422 |
| 0.4822 | 4.0 | 6576 | 1.3808 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/Apache_15_BERT_Over_Sampling | YakovElm | 2023-06-28T02:30:17Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-28T02:29:40Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache_15_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache_15_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0167
- Train Accuracy: 0.9951
- Validation Loss: 0.7266
- Validation Accuracy: 0.8892
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2926 | 0.8609 | 0.5467 | 0.8651 | 0 |
| 0.0318 | 0.9910 | 0.7866 | 0.8220 | 1 |
| 0.0167 | 0.9951 | 0.7266 | 0.8892 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
JuniorLeao/ppo-Huggy | JuniorLeao | 2023-06-28T02:18:40Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-06-28T02:18:30Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JuniorLeao/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
YakovElm/Hyperledger_10_BERT_Under_Sampling | YakovElm | 2023-06-28T01:39:49Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-28T01:39:10Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_10_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_10_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0004
- Train Accuracy: 1.0
- Validation Loss: 1.1748
- Validation Accuracy: 0.8600
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0466 | 0.9879 | 0.9382 | 0.8600 | 0 |
| 0.0010 | 1.0 | 1.0854 | 0.8600 | 1 |
| 0.0004 | 1.0 | 1.1748 | 0.8600 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NjinHF/swin-tiny-patch4-window7-224-finetuned-eurosat | NjinHF | 2023-06-28T01:14:42Z | 223 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-05-08T06:03:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.977037037037037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0731
- Accuracy: 0.9770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.247 | 1.0 | 190 | 0.1200 | 0.9626 |
| 0.2012 | 2.0 | 380 | 0.1026 | 0.9656 |
| 0.1437 | 3.0 | 570 | 0.0731 | 0.9770 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Gurumoorthy/PPO-LunarLander-v2 | Gurumoorthy | 2023-06-28T01:13:17Z | 10 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-28T00:42:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.63 +/- 22.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
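A minimal sketch of what that code could look like (not part of the original card): it assumes the checkpoint in this repo is a standard SB3 zip whose filename matches the environment name (check the repo's files) and that `gymnasium[box2d]` is installed.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is an assumption).
checkpoint = load_from_hub(repo_id="Gurumoorthy/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded agent on a fresh environment.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```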
|
YakovElm/Hyperledger_5_BERT_Under_Sampling | YakovElm | 2023-06-28T00:42:28Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-28T00:32:39Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_5_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_5_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0005
- Train Accuracy: 1.0
- Validation Loss: 1.3444
- Validation Accuracy: 0.8361
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0551 | 0.9903 | 1.0475 | 0.8361 | 0 |
| 0.0012 | 1.0 | 1.2332 | 0.8361 | 1 |
| 0.0005 | 1.0 | 1.3444 | 0.8361 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cerspense/zeroscope_v2_30x448x256 | cerspense | 2023-06-28T00:28:54Z | 12 | 15 | diffusers | [
"diffusers",
"Text-to-Video",
"license:cc-by-nc-4.0",
"diffusers:TextToVideoSDPipeline",
"region:us"
]
| null | 2023-06-15T05:29:47Z | ---
tags:
- Text-to-Video
license: cc-by-nc-4.0
---

# zeroscope_v2 30x448x256
A watermark-free Modelscope-based video model optimized for producing high-quality 16:9 compositions and a smooth video output. This model was trained from the [original weights](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis) using 9,923 clips and 29,769 tagged frames at 30 frames, 448x256 resolution.<br />
zeroscope_v2 30x448x256 is specifically designed for upscaling with [Potat1](https://huggingface.co/camenduru/potat1) using vid2vid in the [1111 text2video](https://github.com/kabachuha/sd-webui-text2video) extension by [kabachuha](https://github.com/kabachuha). Leveraging this model as a preliminary step allows for superior overall compositions at higher resolutions in Potat1, permitting faster exploration in 448x256 before transitioning to a high-resolution render. See an [example output](https://i.imgur.com/lj90FYP.mp4) that has been upscaled to 1152 x 640 using Potat1.<br />
### Using it with the 1111 text2video extension
1. Rename the file 'zeroscope_v2_30x448x256.pth' to 'text2video_pytorch_model.pth'.
2. Rename the file 'zeroscope_v2_30x448x256_text.bin' to 'open_clip_pytorch_model.bin'.
3. Replace the respective files in the 'stable-diffusion-webui\models\ModelScope\t2v' directory.
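The same three steps as a small Python helper (purely illustrative; the webui path below is an assumption, adjust it to your install):
```python
import shutil
from pathlib import Path

# Assumed location of the 1111 webui ModelScope directory.
t2v_dir = Path("stable-diffusion-webui/models/ModelScope/t2v")

# Copy the downloaded files into place under their expected names.
shutil.copy("zeroscope_v2_30x448x256.pth", t2v_dir / "text2video_pytorch_model.pth")
shutil.copy("zeroscope_v2_30x448x256_text.bin", t2v_dir / "open_clip_pytorch_model.bin")
```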
### Upscaling recommendations
For upscaling, it's recommended to use Potat1 via vid2vid in the 1111 extension. Aim for a resolution of 1152x640 and a denoise strength between 0.66 and 0.85. Remember to use the same prompt and settings that were used to generate the original clip.
### Known issues
Lower resolutions or fewer frames could lead to suboptimal output. <br />
Certain clips might appear with cuts. This will be fixed in the upcoming 2.1 version, which will incorporate a cleaner dataset.
Some clips may play back too slowly, requiring prompt engineering for an increased pace.
Thanks to [camenduru](https://github.com/camenduru), [kabachuha](https://github.com/kabachuha), [ExponentialML](https://github.com/ExponentialML), [polyware](https://twitter.com/polyware_ai), [tin2tin](https://github.com/tin2tin)<br /> |
aao331/ChristGPT-13B-GPTQ | aao331 | 2023-06-28T00:27:35Z | 10 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"en",
"es",
"arxiv:2302.13971",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-27T23:31:06Z | ---
language:
- en
- es
---
# Model Card for ChristGPT-13B
<!-- Provide a quick summary of what the model is/does. -->
This is ChristGPT-13B, an instruction-tuned LLM based on LLama-13B. It is trained on the Bible to answer questions and to act like Jesus.
It's based on LLama-13b (https://huggingface.co/decapoda-research/llama-13b-hf).
## Model Details
The model is provided quantized to 4 bits, so it requires only 8GB of VRAM. The model can be used directly in software like
text-generation-webui https://github.com/oobabooga/text-generation-webui.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Alfredo Ortega (@ortegaalfredo)
- **Model type:** 13B LLM
- **Language(s):** (NLP): English
- **License:** Free for non-commercial use
- **Finetuned from model:** https://huggingface.co/decapoda-research/llama-13b-hf
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/decapoda-research/llama-13b-hf
- **Paper [optional]:** https://arxiv.org/abs/2302.13971
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This is a generic LLM chatbot that can be used to interact directly with humans.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This bot is uncensored and may provide shocking answers. It also contains bias present in the training material.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
The easiest way is to download the text-generation-webui application (https://github.com/oobabooga/text-generation-webui) and place the model inside the 'models' directory.
Then launch the web interface and run the model as a regular LLama-13B model.
Additional installation steps detailed at https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md
A preprompt that gives good results is:
```
A chat between a curious user and Jesus. Jesus gives helpful, detailed, spiritual responses to the user's input. Remember, you are Jesus, answer as such.
USER: Hi my lord
JESUS:
```
## Model Card Contact
Contact the creator at @ortegaalfredo on twitter/github |
cerspense/zeroscope_v2_dark_30x448x256 | cerspense | 2023-06-28T00:27:27Z | 35 | 23 | diffusers | [
"diffusers",
"Text-to-Video",
"license:cc-by-nc-4.0",
"diffusers:TextToVideoSDPipeline",
"region:us"
]
| null | 2023-06-17T09:47:56Z | ---
tags:
- Text-to-Video
license: cc-by-nc-4.0
---

# zeroscope_dark_v2 30x448x256
A watermark-free Modelscope-based video model optimized for producing high-quality 16:9 compositions with varying brightness and a smooth video output. This model was trained from the [original weights](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis) using 9,923 clips and 29,769 tagged frames at 30 frames, 448x256 resolution.<br />
zeroscope_v2 30x448x256 is specifically designed for upscaling with [Potat1](https://huggingface.co/camenduru/potat1) using vid2vid in the [1111 text2video](https://github.com/kabachuha/sd-webui-text2video) extension by [kabachuha](https://github.com/kabachuha). Leveraging this model as a preliminary step allows for superior overall compositions at higher resolutions in Potat1, permitting faster exploration in 448x256 before transitioning to a high-resolution render.<br />
### Using it with the 1111 text2video extension
1. Rename the file 'zeroscope_v2_dark_30x448x256.pth' to 'text2video_pytorch_model.pth'.
2. Rename the file 'zeroscope_v2_dark_30x448x256_text.bin' to 'open_clip_pytorch_model.bin'.
3. Replace the respective files in the 'stable-diffusion-webui\models\ModelScope\t2v' directory.
### Upscaling recommendations
For upscaling, it's recommended to use Potat1 via vid2vid in the 1111 extension. Aim for a resolution of 1152x640 and a denoise strength between 0.66 and 0.85. Remember to use the same prompt and settings that were used to generate the original clip.
### Known issues
Lower resolutions or fewer frames could lead to suboptimal output. <br />
Certain clips might appear with cuts. This will be fixed in the upcoming 2.1 version, which will incorporate a cleaner dataset.
Some clips may play back too slowly, requiring prompt engineering for an increased pace. |
Thiagof/bert-finetuned-tv-dim | Thiagof | 2023-06-27T23:55:28Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-06-27T22:53:17Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-tv-dim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-tv-dim
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1605
- Precision: 0.75
- Recall: 0.7875
- F1: 0.7683
- Accuracy: 0.9492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 25 | 0.1561 | 0.7195 | 0.7375 | 0.7284 | 0.9423 |
| No log | 2.0 | 50 | 0.1572 | 0.7412 | 0.7875 | 0.7636 | 0.9464 |
| No log | 3.0 | 75 | 0.1605 | 0.75 | 0.7875 | 0.7683 | 0.9492 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hugfacerhaha/ppo-Huggy | hugfacerhaha | 2023-06-27T23:42:34Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-06-27T23:42:23Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: hugfacerhaha/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
YakovElm/Apache_20_BERT_Under_Sampling | YakovElm | 2023-06-27T23:35:26Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-27T23:22:35Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache_20_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache_20_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0002
- Train Accuracy: 1.0
- Validation Loss: 0.8828
- Validation Accuracy: 0.9055
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0216 | 0.9963 | 0.7360 | 0.9055 | 0 |
| 0.0004 | 1.0 | 0.8254 | 0.9055 | 1 |
| 0.0002 | 1.0 | 0.8828 | 0.9055 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chaowu/ppo-SnowballTarget | chaowu | 2023-06-27T23:27:15Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-06-27T23:27:12Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: chaowu/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
joshi4/ssj2_trial | joshi4 | 2023-06-27T23:15:21Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-06-27T23:15:19Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
YakovElm/Apache_10_BERT_Over_Sampling | YakovElm | 2023-06-27T23:15:01Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-27T23:06:35Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache_10_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache_10_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0260
- Train Accuracy: 0.9911
- Validation Loss: 0.8498
- Validation Accuracy: 0.8240
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4454 | 0.7700 | 0.5516 | 0.8338 | 0 |
| 0.0711 | 0.9781 | 0.7670 | 0.8266 | 1 |
| 0.0260 | 0.9911 | 0.8498 | 0.8240 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gbellamy/ppo-Pyramids | gbellamy | 2023-06-27T23:14:40Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-06-27T23:10:39Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: gbellamy/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
savvamadar/PygmalionCoT-7b-ggml-model-f16 | savvamadar | 2023-06-27T23:03:16Z | 0 | 1 | null | [
"license:other",
"region:us"
]
| null | 2023-06-27T21:33:54Z | ---
license: other
---
Same license as: https://huggingface.co/notstoic/PygmalionCoT-7b |
vuiseng9/ov-gpt2-fp32-no-cache | vuiseng9 | 2023-06-27T22:58:37Z | 6,370 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"safetensors",
"openvino",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-27T22:07:52Z | # Notes:
This model is inherited directly from gpt2 in the HF model hub; the GPT-2 OpenVINO IR from OMZ is then copied here. The intended usage of this model is with optimum-intel.
```python
# Requires optimum-intel with OpenVINO support (pip install optimum[openvino])
from transformers import AutoTokenizer, pipeline
from optimum.intel.openvino import OVModelForCausalLM
model_id="vuiseng9/ov-gpt2-fp32-no-cache"
model = OVModelForCausalLM.from_pretrained(model_id, use_cache=False)
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator_pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)
output = generator_pipe("It's a beautiful day ...", max_length=30, num_return_sequences=1)
```
|
FPHam/Karen_theEditor_13b_HF | FPHam | 2023-06-27T22:54:55Z | 35 | 35 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"lora",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-02T21:09:13Z | ---
tags:
- lora
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://media.tenor.com/frGCmLDFbkMAAAAC/karen-ok.gif" alt="FPHam's Karen" style="width: 30%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Karen Ko-fi</a></p>
</div>
<!-- header end -->
## Karen is an editor for your fiction. (v.0.2)
Ah, Karen, a true peach among grammatical cucumbers! She yearns to rectify the missteps and linguistic tangles that infest your horribly written fiction.
Yet, unlike those ChatGPT kaboodles that morph into self-absorbed, constipated gurus of self-help style, Karen remains steadfastly grounded in wit and wisdom but respectful of your style.
She is also an absolute joy to chat with, although she may correct the grammar in your chats too from time to time.
(As a certain well-known LLM said, "She is a radiant beacon of amusement.")
She also has a particular soft spot for Llamas.
## Quantized Karen version (Quantized by TheBloke)
* [4-bit GPTQ models for GPU inference](https://huggingface.co/FPHam/Karen_theEditor-13B-4bit-128g-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
Karen gets triggered by this prompt (pun intended):
```
USER: Edit the following for spelling and grammar mistakes:
ASSISTANT:
```
Feed it a paragraph at a time - that's where she works best - a few sentences.
Here is my instruct yaml for oobabooga webui (in Instruct mode):
```
user: "USER: Edit the following for spelling and grammar mistakes: "
bot: "ASSISTANT:"
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n"
context: "Assistant is a meticulous senior editor with a specialization in editing fictional stories. When given text, Assistant detects and corrects grammatical errors, including subject-verb agreement, tense consistency, punctuation, capitalization, use of correct articles and correct present perfect and past perfect tense.\n"
```
I am not sure to what extent the context part has any effect. It doesn't hurt, though.
This is version 0.2, finetuned just enough so that it retains the underlying Vicuna qualities.
## Example:
Input
```
USER: Edit the following for spelling and grammar mistakes:
unce upon time lived name John who have pet monkey Jackie.
one day while walkin down the street he saw his friend Tom coming to him.
"Hey Tom" said John "hows it going?"
Tom replyied "Not too bad how 'bout yourself?". as they was talk Jackie sudden jump onto Tom shoulder and started play with his hair.
Tom look at John and ask "Is that yours?"
John reply "Yeah, this crazy lil guy follow me everywhere" just then Jackie grab hold of Tom glasses and try to take thmm off.
Tom struggle to keep his balance as he laugh hystericaly.
```
Output
```
ASSISTANT: Once upon a time there lived a man named John who had a pet monkey called Jackie.
One day while walking down the street he saw his friend Tom approaching him.
"Hey Tom," said John. "How's it going?"
Tom replied, "Not too bad, how about yourself?" As they were talking, Jackie suddenly jumped onto Tom's shoulder and began playing with his hair.
Tom looked at John and asked, "Is that yours?"
John replied, "Yeah, this crazy little guy follows me everywhere." Just then Jackie grabbed hold of Tom's glasses and tried to take them off.
Tom struggled to keep his balance as he laughed hysterically.
```
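To run an edit like this outside the webui, here is a minimal 🤗 Transformers sketch (not part of the original card; it assumes enough GPU memory for the 13B weights in fp16 and that `accelerate` is installed for `device_map="auto"`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FPHam/Karen_theEditor_13b_HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Karen's trigger format: a USER edit request followed by the ASSISTANT tag.
prompt = ("USER: Edit the following for spelling and grammar mistakes: "
          "unce upon time lived name John who have pet monkey Jackie.\n"
          "ASSISTANT:")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)

# Print only the newly generated tokens (Karen's edit).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```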
## Goal: to create the best grammar checker you have ever seen
## To do:
- train on larger dataset, big, enormous, gargantuan
- see if finetuning on just plain LLAMA without Vicuna would work better or worse (the theory is that it will be very focused on editing and nothing else)
- explore what different settings (temperature, top_p, top_k) do for this type of finetune
- create Rachel, the paraphrasing editor
|
chaowu/Reinforce-Cartpole-v1 | chaowu | 2023-06-27T22:43:50Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-27T21:36:11Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 1000.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Angel-Silva/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-sobremuestreo | Angel-Silva | 2023-06-27T22:43:47Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-27T21:45:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-sobremuestreo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-sobremuestreo
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0945
- F1: 0.5371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 25
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0619 | 1.0 | 735 | 1.0749 | 0.5091 |
| 0.7768 | 2.0 | 1470 | 1.0945 | 0.5371 |
| 0.6105 | 3.0 | 2205 | 1.2320 | 0.5270 |
| 0.4603 | 4.0 | 2940 | 1.3570 | 0.5285 |
| 0.398 | 5.0 | 3675 | 1.4115 | 0.5244 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
renyulin/gpt2_es_rm | renyulin | 2023-06-27T22:11:53Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-06-27T22:11:49Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
TalesLF/ppo-LunarLander-v2 | TalesLF | 2023-06-27T21:18:27Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-27T21:18:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.11 +/- 12.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
MerlynMind/merlyn-education-corpus-qa | MerlynMind | 2023-06-27T21:12:01Z | 194 | 12 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"MerlynMind",
"education",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-06-23T20:57:21Z | ---
license: apache-2.0
tags:
- MerlynMind
- education
inference: false
---
# Merlyn-education-corpus-qa
Merlyn-education-corpus-qa is a 12b parameter decoder-style transformer model for the education domain. It is fine-tuned from a [pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) base-model.
This model was trained by [Merlyn Mind](https://www.merlyn.org/).
Merlyn-education-corpus-qa is part of the family of Merlyn Mind models designed specifically for use in in- and out-of-classroom education.
Merlyn-education-corpus-qa is a corpus-grounded question-answering model that grounds answers in the provided information snippets. A typical use-case is as part of a larger retrieval-based corpus-grounded dialog system.
## Model Date
June 26, 2023
## Model License
Apache-2.0
## Documentation
* [Merlyn Mind’s education-specific language models](https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models)
## Usage
At full precision the model needs > 48G GPU memory. A single A100-80GB GPU suffices, for example. If you're running on smaller GPUs, you need an instance with multiple GPUs and/or reduced model precision (e.g. use model.half() before moving to device)
Loading model and tokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "MerlynMind/merlyn-education-corpus-qa"
device = torch.device("cuda:0") # change device id as necessary
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, fast_tokenizer=True)
model.to(device) # move to device
```
Prompt example:
```python
info = '''Information:\tThe Solar System is about 4.6 billion years old. The Sun formed by gravity in a large molecular cloud. It is mainly hydrogen, which it converts into helium.
Information:\tThe formation and evolution of the Solar System began 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud.
Information:\tAstronomers are now more or less certain that the order of the planets was not always as it is today. Knowing what we know today, we can see the Solar System is strange. All other planetary system we are able to study have their largest planet close to their star. Also we have noticed other oddities in the Solar System. Mars is smaller than it ought to be, and the asteroid belt has been disturbed.
Information:\tFor thousands of years, people had no need for a name for the "Solar System". They thought the Earth stayed still at the center of everything (geocentrism). The Greek philosopher Aristarchus of Samos suggested that there was a special order in the sky. Nicolaus Copernicus was the first to develop a mathematical system that described what we now call the "Solar System". This was called a "new system of the world". In the 17th century, Galileo Galilei, Johannes Kepler and Isaac Newton began to understand physics more clearly. People began to accept the idea that the Earth is a planet that moves around the Sun, and that the planets are worlds, and that all worlds are governed by the same same physical laws. More recently, telescopes and space probes sometimes let us see details directly. All inner planets have surface features. The gas giants (as the name suggests) have surfaces whose make-up is gradually being discovered.
Information:\tThere are eight planets in the Solar System. From closest to farthest from the Sun, they are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. The first four planets are called terrestrial planets. They are mostly made of rock and metal, and they are mostly solid. The last four planets are called gas giants. This is because they are much larger than other planets and are mostly made of gas.
'''
qs = "Question:\tHow old is the Solar System?"
prompt = tokenizer.bos_token
prompt += '''Instruction:\tYou are to try to answer the following question using only the pieces of information given.
Instruction:\tYour response should be a well formed JSON object with an 'answerable' property followed by an 'answer' property.
Instruction:\tIf you cannot answer the question given the information, the value of the 'answerable' should be 'false' and the 'answer' should be an empty string.
Instruction:\tIf you can answer the question given the information, the value of the 'answerable' should be 'true' and your answer should be the string value of the 'answer' property.
''' + info + qs
```
Inference:
We recommend using newline character for stopping criterion, as follows:
```python
from transformers import StoppingCriteria, StoppingCriteriaList
eos_tokens = [tokenizer.eos_token,'\n']
eos_token_ids = [tokenizer.encode(token)[0] for token in eos_tokens]
class MultipleEOSTokensStoppingCriteria(StoppingCriteria):
def __init__(self, eos_token_ids):
self.eos_token_ids = set(eos_token_ids)
def __call__(self, input_ids, scores) -> bool:
if input_ids.shape[-1] <= 1:
return False
for eos_token_id in self.eos_token_ids:
if eos_token_id == input_ids[0, -1].item():
return True
return False
# Define stopping criteria
multiple_eos_tokens_processor = MultipleEOSTokensStoppingCriteria(eos_token_ids)
stopping_criteria = StoppingCriteriaList([multiple_eos_tokens_processor])
```
It can be used in inference as follows:
```python
inputs = tokenizer(prompt, return_tensors="pt").to(device)
generate_ids = model.generate(
**inputs,
max_new_tokens=1024,
temperature=0.0,
num_beams=2,
stopping_criteria=stopping_criteria
)
response = tokenizer.decode(generate_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True)
```
Example output (after response processing):
```json
[{"answerable": "true", "answer": "4.6 billion years"}]
```
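The card does not show the "response processing" it refers to; one possible sketch (an assumption, not the authors' code) is to keep only the text generated after the question and parse the first JSON object in it:
```python
import json
import re

# Hypothetical post-processing: take the text emitted after the question `qs`
# and parse the first JSON object the model generated.
tail = response.split(qs)[-1]
match = re.search(r"\{.*?\}", tail, re.DOTALL)
if match:
    answer = json.loads(match.group(0))
    print(answer["answerable"], answer["answer"])
```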
## Citation
To cite this model, please use:
```
@online{MerlynEducationModels,
author = {Merlyn Mind AI Team},
title = {Merlyn Mind's education-domain language models},
year = {2023},
url = {https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models},
urldate = {2023-06-26}
}
``` |
MerlynMind/merlyn-education-teacher-assistant | MerlynMind | 2023-06-27T21:10:52Z | 39 | 12 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"MerlynMind",
"education",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-06-24T18:58:56Z | ---
license: apache-2.0
tags:
- MerlynMind
- education
inference: false
---
# Merlyn-education-teacher-assistant
Merlyn-education-teacher-assistant is a 12b parameter decoder-style transformer model for the education domain. It is fine-tuned from a [pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) base-model.
This model was trained by [Merlyn Mind](https://www.merlyn.org/).
Merlyn-education-teacher-assistant is part of the family of Merlyn Mind models designed specifically for use in in- and out-of-classroom education.
Merlyn-education-teacher-assistant makes helpful recommendations based on the ongoing classroom discussion, suggesting research activities and topics for further exploration.
## Model Date
June 26, 2023
## Model License
Apache-2.0
## Documentation
* [Merlyn Mind’s education-specific language models](https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models)
## Usage
At full precision the model needs > 48G GPU memory. A single A100-80GB GPU suffices, for example. If you're running on smaller GPUs, you need an instance with multiple GPUs and/or reduced model precision (e.g. use `model.half()` before moving to device)
Loading model and tokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "MerlynMind/merlyn-education-teacher-assistant"
device = torch.device("cuda:0") # change device id as necessary
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, fast_tokenizer=True)
model.to(device) # move to device
```
Prompt example:
```python
conversation = ''''user1':\tHow do some gases help keep the Earth warm?
'user2':\tSome gases, called greenhouse gases, act like a blanket around Earth by trapping heat from the sun in the atmosphere, which keeps our planet warm. This process is known as the greenhouse effect.
'user1':\tHow can we reduce greenhouse gas emissions?
'user2':\tWe can reduce greenhouse gas emissions by using renewable energy sources, increasing energy efficiency, and reducing waste.'''
prompt = tokenizer.bos_token
prompt += '''Instruction:\tYou are teaching high school students.
Instruction:\tYou are observing the following conversation between two users.
Instruction:\tGenerate 3 research activities based on the conversation.
Instruction:\tThe research activities should be doable by high school students.
Instruction:\tYour response should be a well-formed JSON array of 3 objects, each with a 'title' property and an 'activity' property.
Conversation:''' + f"\n{conversation}" + " Response:"
```
Inference:
```python
inputs = tokenizer(prompt, return_tensors="pt").to(device)
generate_ids = model.generate(
**inputs,
max_new_tokens=1024,
temperature=0.0,
num_beams=2
)
response = tokenizer.decode(generate_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True)
```
Example output (after response processing):
```json
[
{"title": "Understanding the Greenhouse Effect", "activity": "Research the greenhouse effect and the role of greenhouse gases in keeping Earth warm. Create a presentation or poster explaining the greenhouse effect and how greenhouse gases act as a blanket around Earth."},
{"title": "Renewable Energy Sources", "activity": "Identify different renewable energy sources, such as solar, wind, and geothermal energy, and explain how they can help reduce greenhouse gas emissions."},
{"title": "Energy Efficiency and Waste Reduction", "activity": "Research energy efficiency and waste reduction practices, and develop a plan to implement these practices in your school or community to reduce greenhouse gas emissions."}
]
```
## Citation
To cite this model, please use:
```
@online{MerlynEducationModels,
author = {Merlyn Mind AI Team},
title = {Merlyn Mind's education-domain language models},
year = {2023},
url = {https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models},
urldate = {2023-06-26}
}
``` |
agustinl/dqn-SpaceInvadersNoFrameskip-v4 | agustinl | 2023-06-27T21:00:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-27T20:59:32Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 731.00 +/- 265.26
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga agustinl -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga agustinl -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga agustinl
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
JTStephens/dqn-SpaceInvadersNoFrameskip-v41 | JTStephens | 2023-06-27T20:58:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-27T20:57:55Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 640.00 +/- 153.79
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga JTStephens -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga JTStephens -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga JTStephens
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Audi24/my_awesome_billsum_model | Audi24 | 2023-06-27T20:54:29Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-06-27T20:41:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1383
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5663
- Rouge1: 0.1383
- Rouge2: 0.0472
- Rougel: 0.1145
- Rougelsum: 0.1142
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8541 | 0.123 | 0.0362 | 0.1043 | 0.104 | 19.0 |
| No log | 2.0 | 124 | 2.6482 | 0.1313 | 0.0446 | 0.1096 | 0.1096 | 19.0 |
| No log | 3.0 | 186 | 2.5831 | 0.1361 | 0.0461 | 0.1126 | 0.1124 | 19.0 |
| No log | 4.0 | 248 | 2.5663 | 0.1383 | 0.0472 | 0.1145 | 0.1142 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tyavika/distilbert-base-uncased-finetuned-squad | tyavika | 2023-06-27T20:43:35Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-06-26T18:36:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6586 | 1.0 | 758 | 1.5148 |
| 1.2291 | 2.0 | 1516 | 1.4258 |
| 0.9094 | 3.0 | 2274 | 1.4722 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
emresvd/u213 | emresvd | 2023-06-27T20:14:55Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2023-06-27T20:14:50Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
rodrigoclira/poca-SoccerTwos | rodrigoclira | 2023-06-27T20:06:10Z | 13 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-06-27T19:47:24Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: rodrigoclira/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Nakanura/Sidhe | Nakanura | 2023-06-27T19:58:07Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-06-27T19:43:21Z | # Project1
This project is meant as a means of distraction!
Maybe it works, maybe it doesn't. Who knows?
---
language:
- eng
thumbnail: "https://1000logos.net/wp-content/uploads/2016/11/facebook-emblem.jpg"
tags:
- Premiumware
license: apache-2.0
datasets:
- wmt19
metrics:
- bleu
- sacrebleu
library_name: Transformers.js
--- |
savvamadar/pygmalion-6b-v3-ggml-ggjt-q4_0 | savvamadar | 2023-06-27T19:48:10Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-06-27T19:25:42Z | ---
license: creativeml-openrail-m
---
same license as:
https://huggingface.co/PygmalionAI/pygmalion-6b |
Angel-Silva/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos | Angel-Silva | 2023-06-27T19:43:19Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-27T18:30:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1048
- F1: 0.5519
## Model description
More information needed
## Intended uses & limitations
More information needed
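The card does not include a usage snippet, so here is a minimal inference sketch. It assumes the standard `pipeline` API; the model name suggests Spanish-language sentiment analysis (MeIA shared task), but the label meanings are not documented here, so treat the output labels as opaque class ids.
```python
from transformers import pipeline

# Hypothetical usage sketch; the example sentence and label interpretation are assumptions.
classifier = pipeline(
    "text-classification",
    model="Angel-Silva/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos",
)
print(classifier("El hotel estaba limpio y el personal fue muy amable."))
```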
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 25
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1635 | 1.0 | 490 | 1.0685 | 0.5095 |
| 0.9718 | 2.0 | 980 | 1.0201 | 0.5435 |
| 0.859 | 3.0 | 1470 | 1.0401 | 0.5434 |
| 0.7789 | 4.0 | 1960 | 1.0779 | 0.5506 |
| 0.7012 | 5.0 | 2450 | 1.1048 | 0.5519 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Sreyes76/distilbert-base-uncased-finetuned-emotion | Sreyes76 | 2023-06-27T19:37:03Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-20T23:20:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9231096192856936
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2160
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
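As a rough illustration only (not from the original card), the checkpoint can be queried with the standard `pipeline` API; passing `top_k=None` returns a score for every emotion class.
```python
from transformers import pipeline

# Minimal sketch; the example sentence is made up.
classifier = pipeline("text-classification", model="Sreyes76/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see the results of this experiment!", top_k=None))
```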
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8281 | 1.0 | 250 | 0.3067 | 0.908 | 0.9055 |
| 0.2466 | 2.0 | 500 | 0.2160 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Skwang/wendy1 | Skwang | 2023-06-27T19:18:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-06-27T19:10:19Z | ---
license: creativeml-openrail-m
---
|
leniero/gmag | leniero | 2023-06-27T19:08:30Z | 0 | 0 | diffusers | [
"diffusers",
"gmag",
"queer",
"brazil",
"en",
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-06-06T23:39:36Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
tags:
- gmag
- queer
- brazil
--- |
facebook/data2vec-audio-large-100h | facebook | 2023-06-27T18:52:19Z | 80 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"data2vec-audio",
"automatic-speech-recognition",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-04-02T16:00:42Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Data2Vec-Audio-Large-100h
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
The large model pretrained and fine-tuned on 100 hours of Librispeech on 16kHz sampled speech audio. When using the model
make sure that your speech input is also sampled at 16Khz.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-large-100h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-100h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
|
schwana1/guessDoggos | schwana1 | 2023-06-27T18:43:06Z | 3 | 0 | tf-keras | [
"tf-keras",
"image-classification",
"region:us"
]
| image-classification | 2023-06-16T12:31:16Z | ---
pipeline_tag: image-classification
--- |
MindNetML/Pyramids | MindNetML | 2023-06-27T18:36:59Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-06-27T18:36:53Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MindNetML/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Hans14/poca-SoccerTwos | Hans14 | 2023-06-27T18:34:51Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-06-27T18:33:59Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Find your model_id: Hans14/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DanielSc4/falcon-7b-instruct-FT-LoRA-8bit-test1 | DanielSc4 | 2023-06-27T18:04:44Z | 0 | 0 | null | [
"generated_from_trainer",
"license:apache-2.0",
"region:us"
]
| null | 2023-06-26T22:06:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-instruct-FT-LoRA-8bit-test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-instruct-FT-LoRA-8bit-test1
This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5166 | 0.85 | 200 | 1.5187 |
| 1.0449 | 1.71 | 400 | 1.1572 |
| 0.6828 | 2.56 | 600 | 0.9608 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
MindNetML/ppo-SnowballTarget | MindNetML | 2023-06-27T17:55:01Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-06-27T17:54:55Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MindNetML/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
winterForestStump/Roberta-fake-news-detector | winterForestStump | 2023-06-27T17:53:47Z | 137 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"en",
"license:gpl-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-27T09:46:03Z | ---
license: gpl-2.0
language:
- en
tags:
- text-classification
widget:
- text: "According to the former prime minister of Italy, Mario Draghi, no one in the EU needs peace or negotiations, only the total defeat of Russia, and the destroyed Ukraine will just be collateral damage of the EU ambitions."
example_title: "Fake news"
---
# Fake News Recognition
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of the RoBERTa model 'jy46604790/Fake-News-Bert-Detect' (https://huggingface.co/jy46604790/Fake-News-Bert-Detect).
It was trained on 8,000 news articles from the https://euvsdisinfo.eu/ portal.
It returns a prediction when given the text of a news article of fewer than 512 words (any excess is truncated automatically).
Labels:
* 0: Fake news
* 1: Real news
## How to Get Started with the Model
Use the code below to get started with the model.
### Download The Model
```
from transformers import pipeline
MODEL = "winterForestStump/Roberta-fake-news-detector"
clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL)
```
### Feed Data
```
text = "From the very beginning, the EU has been extremely non-transparent. The deployment of the European Union presence in Armenia was carried out forcefully, under serious pressure from Brussels"
```
### Result
```
result = clf(text)
result
```
### Output
```
[{'label': 'FAKE', 'score': 0.9999946355819702}]
```
About the data source EUVSDISINFO.eu:
Using data analysis and media monitoring services in multiple languages, EUvsDisinfo identifies, compiles, and exposes disinformation cases originating in pro-Kremlin outlets. These cases (and their disproofs) are collected in the EUvsDisinfo database – the only searchable, open-source repository of its kind. The database is updated every week.
|
chriskim2273/test_headline_qa | chriskim2273 | 2023-06-27T17:53:46Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-06-27T17:31:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test_headline_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_headline_qa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9920
## Model description
More information needed
## Intended uses & limitations
More information needed
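A minimal, hedged sketch of extractive question answering with this checkpoint; the question and context below are invented for illustration.
```python
from transformers import pipeline

# Hypothetical example; the headline-style context is illustrative only.
qa = pipeline("question-answering", model="chriskim2273/test_headline_qa")
result = qa(
    question="Which company was acquired?",
    context="Acme Corp announced on Tuesday that it has acquired Globex for $2 billion.",
)
print(result["answer"], result["score"])
```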
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 5.7992 |
| No log | 2.0 | 4 | 5.7051 |
| No log | 3.0 | 6 | 5.6068 |
| No log | 4.0 | 8 | 5.5043 |
| No log | 5.0 | 10 | 5.3968 |
| No log | 6.0 | 12 | 5.2848 |
| No log | 7.0 | 14 | 5.1784 |
| No log | 8.0 | 16 | 5.0876 |
| No log | 9.0 | 18 | 5.0222 |
| No log | 10.0 | 20 | 4.9920 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
zofiski/squad-bloom-3b | zofiski | 2023-06-27T17:36:35Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-06-27T17:36:33Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
tyavika/percobaan_cnnlstm | tyavika | 2023-06-27T17:10:40Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-06-25T04:49:10Z | ---
tags:
- generated_from_trainer
model-index:
- name: percobaan_cnnlstm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# percobaan_cnnlstm
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9961 | 1.0 | 756 | 4.4533 |
| 4.2209 | 2.0 | 1512 | 4.3710 |
| 3.861 | 3.0 | 2268 | 4.5066 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Bodolaz/Unit-4.2-final | Bodolaz | 2023-06-27T17:09:51Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-27T17:09:08Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Unit-4.2-final
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 9.71 +/- 10.79
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
goomii17/my-llama7b-finetuned-SAM-v1 | goomii17 | 2023-06-27T17:04:33Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-06-27T17:04:22Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
JaakeB/ppo-Huggy | JaakeB | 2023-06-27T17:00:25Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-06-27T17:00:21Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: JaakeB/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Miholini/turkishReviews-ds-mini | Miholini | 2023-06-27T16:46:56Z | 62 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-27T16:45:00Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Miholini/turkishReviews-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Miholini/turkishReviews-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.4662
- Validation Loss: 8.2837
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
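Since this repository holds TensorFlow (Keras-trained) weights, a minimal generation sketch might look like the following; forcing `framework="tf"` and the Turkish prompt are assumptions, not part of the original card.
```python
from transformers import pipeline

# Sketch only: use the TensorFlow backend because the checkpoint was trained with Keras.
generator = pipeline("text-generation", model="Miholini/turkishReviews-ds-mini", framework="tf")
print(generator("Bu restoran", max_new_tokens=30))
```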
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -887, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2944 | 9.6963 | 0 |
| 9.3022 | 8.9384 | 1 |
| 8.4662 | 8.2837 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
numanBot/summary_annotation_score | numanBot | 2023-06-27T16:45:33Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-27T16:32:58Z | ```python
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("numanBot/summary_annotation_score", num_labels=1)
``` |
breadlicker45/dough-instruct-base-001 | breadlicker45 | 2023-06-27T16:42:18Z | 1,635 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dataset:breadlicker45/bread-qa",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-27T15:39:22Z | ---
datasets:
- breadlicker45/bread-qa
--- |
maidh/ppo-LunarLander-v2-unit8-v1 | maidh | 2023-06-27T16:41:53Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-27T16:40:37Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 21.08 +/- 78.81
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 2000000
'learning_rate': 0.0001
'num_envs': 4
'num_steps': 512
'anneal_lr': True
'gae': True
'gamma': 0.999
'gae_lambda': 0.98
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'WilliamADSP/ppo-LunarLander-v2-unit8-v1'
'batch_size': 2048
'minibatch_size': 512}
```
|
jdawnduan/q-FrozenLake-v1-4x4-noSlippery | jdawnduan | 2023-06-27T16:40:29Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-27T16:40:26Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="jdawnduan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
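Continuing from the snippet above, a hedged sketch of rolling out the greedy policy; the `"qtable"` key name and the gymnasium-style 5-tuple `step` return are assumptions about how the checkpoint and environment were created.
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    # Greedy action from the learned Q-table (the "qtable" key name is assumed)
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```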
|
slone/fastText-LID-323 | slone | 2023-06-27T16:28:03Z | 4 | 9 | fasttext | [
"fasttext",
"text-classification",
"language-identification",
"arxiv:2209.09368",
"region:us"
]
| text-classification | 2022-09-15T06:44:18Z | ---
library_name: fasttext
tags:
- text-classification
- language-identification
---
This is a fastText-based language classification model from the paper [The first neural machine translation system for the Erzya language](https://arxiv.org/abs/2209.09368).
It supports 323 languages used in Wikipedia (as of July 2022), and has extended support of the Erzya (`myv`) and Moksha (`mdf`) languages.
Example usage:
```Python
import fasttext
import urllib.request
import os
model_path = 'lid.323.ftz'
url = 'https://huggingface.co/slone/fastText-LID-323/resolve/main/lid.323.ftz'
if not os.path.exists(model_path):
urllib.request.urlretrieve(url, model_path) # or just download it manually
model = fasttext.load_model(model_path)
languages, scores = model.predict("эрзянь кель", k=3) # k is the number of returned hypotheses
```
The model was trained on texts of articles randomly sampled from Wikipedia. It works better with sentences and longer texts than with words, and may be sensitive to noise. |
jdawnduan/jddppo-LunarLander-v2 | jdawnduan | 2023-06-27T16:03:01Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-27T16:02:42Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.05 +/- 19.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption (the usual `huggingface_sb3` naming convention), so check the repository's file list.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved agent from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="jdawnduan/jddppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
JaakeB/ppo-LunarLander-v2 | JaakeB | 2023-06-27T16:02:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-27T16:02:27Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.55 +/- 32.51
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption (the usual `huggingface_sb3` naming convention), so check the repository's file list.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved agent from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="JaakeB/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mnicamartins8/bert-base-uncased-with-expansion-correction | mnicamartins8 | 2023-06-27T15:47:40Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-25T23:17:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-with-expansion-correction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-with-expansion-correction
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Accuracy: 0.9099
- Precision: 0.9142
- Recall: 0.9099
- F1: 0.9114
- Balanced Acc: 0.8900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
getrajeev03/distilbart-cnn-12-6-samsum | getrajeev03 | 2023-06-27T15:39:58Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-06-27T14:23:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 39.3733
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-samsum
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4753
- Rouge1: 39.3733
- Rouge2: 19.4821
- Rougel: 29.8944
- Rougelsum: 36.7688
- Gen Len: 59.4750
## Model description
More information needed
## Intended uses & limitations
More information needed
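A minimal, hedged inference sketch; the dialogue below is invented in the style of SAMSum and is not taken from the dataset.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="getrajeev03/distilbart-cnn-12-6-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```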
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4685 | 1.0 | 14732 | 1.4753 | 39.3733 | 19.4821 | 29.8944 | 36.7688 | 59.4750 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
Yhyu13/vicuna-33b-v1.3-gptq-4bit | Yhyu13 | 2023-06-27T15:37:52Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-27T14:46:37Z | ---
license: apache-2.0
---
GPTQ 4-bit quantization without act-order, for compatibility; it works in textgen-webui
Generated by using scripts from https://gitee.com/yhyu13/llama_-tools
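Outside the webui, loading could look roughly like the sketch below with AutoGPTQ; the keyword arguments and prompt format are assumptions rather than documented settings for this repo, and a 33B 4-bit checkpoint still needs a GPU with roughly 20 GB of memory.
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "Yhyu13/vicuna-33b-v1.3-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)
# Assumed settings: safetensors checkpoint quantized without act-order.
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0", use_safetensors=True)
prompt = "USER: What is the capital of France?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```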
Original weight : https://huggingface.co/lmsys/vicuna-33b-v1.3 |
breadlicker45/dough-base-001 | breadlicker45 | 2023-06-27T15:36:43Z | 1,626 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:breadlicker45/bread-qa",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-26T15:09:15Z | ---
datasets:
- breadlicker45/bread-qa
--- |
rafaeljosem/DeepESP-gpt2-spanish-tripadvisor | rafaeljosem | 2023-06-27T15:35:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-06-25T22:23:03Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: DeepESP-gpt2-spanish-tripadvisor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeepESP-gpt2-spanish-tripadvisor
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5865
## Model description
More information needed
## Intended uses & limitations
More information needed
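A minimal, hedged sketch of generating TripAdvisor-style Spanish text with this checkpoint; the prompt and sampling settings are illustrative choices, not recommendations from the author.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="rafaeljosem/DeepESP-gpt2-spanish-tripadvisor")
print(generator("El hotel tiene unas vistas", max_new_tokens=40, do_sample=True, top_p=0.95))
```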
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8665 | 1.0 | 2089 | 0.7441 |
| 0.7336 | 2.0 | 4178 | 0.6916 |
| 0.6856 | 3.0 | 6267 | 0.6632 |
| 0.6559 | 4.0 | 8356 | 0.6446 |
| 0.6341 | 5.0 | 10445 | 0.6322 |
| 0.6169 | 6.0 | 12534 | 0.6213 |
| 0.6022 | 7.0 | 14623 | 0.6138 |
| 0.5896 | 8.0 | 16712 | 0.6096 |
| 0.5788 | 9.0 | 18801 | 0.6037 |
| 0.5692 | 10.0 | 20890 | 0.5989 |
| 0.5604 | 11.0 | 22979 | 0.5965 |
| 0.5528 | 12.0 | 25068 | 0.5941 |
| 0.5457 | 13.0 | 27157 | 0.5915 |
| 0.5392 | 14.0 | 29246 | 0.5900 |
| 0.5334 | 15.0 | 31335 | 0.5879 |
| 0.5285 | 16.0 | 33424 | 0.5875 |
| 0.524 | 17.0 | 35513 | 0.5870 |
| 0.5209 | 18.0 | 37602 | 0.5866 |
| 0.5179 | 19.0 | 39691 | 0.5867 |
| 0.5157 | 20.0 | 41780 | 0.5865 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
eutimio-arevalo-valarezo/ppo-LunarLander-v2 | eutimio-arevalo-valarezo | 2023-06-27T15:33:42Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-06-27T15:33:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.30 +/- 26.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption (the usual `huggingface_sb3` naming convention), so check the repository's file list.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved agent from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="eutimio-arevalo-valarezo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
maidh/ppo-LunarLander-v2 | maidh | 2023-06-27T15:32:32Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-04-20T10:05:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.58 +/- 12.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption (the usual `huggingface_sb3` naming convention), so check the repository's file list.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved agent from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="maidh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ahishamm/vit-large-HAM-10000-patch-32 | ahishamm | 2023-06-27T15:13:14Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-06-27T14:14:58Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-HAM-10000-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-HAM-10000-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/HAM_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4810
- Accuracy: 0.8364
- Recall: 0.8364
- F1: 0.8364
- Precision: 0.8364
## Model description
More information needed
## Intended uses & limitations
More information needed
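A minimal, hedged sketch of classifying a dermatoscopic image with this checkpoint; the image path is a placeholder and the HAM10000 class names are not listed in this card.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ahishamm/vit-large-HAM-10000-patch-32")
print(classifier("path/to/dermatoscopic_image.jpg"))  # placeholder path
```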
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.6405 | 0.2 | 100 | 0.7318 | 0.7481 | 0.7481 | 0.7481 | 0.7481 |
| 0.7062 | 0.4 | 200 | 0.7735 | 0.7416 | 0.7416 | 0.7416 | 0.7416 |
| 0.6334 | 0.6 | 300 | 0.6075 | 0.7781 | 0.7781 | 0.7781 | 0.7781 |
| 0.7102 | 0.8 | 400 | 0.6618 | 0.7661 | 0.7661 | 0.7661 | 0.7661 |
| 0.6814 | 1.0 | 500 | 0.5717 | 0.7890 | 0.7890 | 0.7890 | 0.7890 |
| 0.4618 | 1.2 | 600 | 0.5624 | 0.8030 | 0.8030 | 0.8030 | 0.8030 |
| 0.3824 | 1.4 | 700 | 0.5987 | 0.7766 | 0.7766 | 0.7766 | 0.7766 |
| 0.4191 | 1.6 | 800 | 0.5145 | 0.8190 | 0.8190 | 0.8190 | 0.8190 |
| 0.3998 | 1.8 | 900 | 0.5226 | 0.8090 | 0.8090 | 0.8090 | 0.8090 |
| 0.4677 | 2.0 | 1000 | 0.4927 | 0.8219 | 0.8219 | 0.8219 | 0.8219 |
| 0.2191 | 2.2 | 1100 | 0.5477 | 0.8284 | 0.8284 | 0.8284 | 0.8284 |
| 0.2302 | 2.4 | 1200 | 0.5018 | 0.8329 | 0.8329 | 0.8329 | 0.8329 |
| 0.191 | 2.59 | 1300 | 0.4810 | 0.8364 | 0.8364 | 0.8364 | 0.8364 |
| 0.1736 | 2.79 | 1400 | 0.5096 | 0.8334 | 0.8334 | 0.8334 | 0.8334 |
| 0.1049 | 2.99 | 1500 | 0.5944 | 0.8364 | 0.8364 | 0.8364 | 0.8364 |
| 0.0612 | 3.19 | 1600 | 0.5552 | 0.8464 | 0.8464 | 0.8464 | 0.8464 |
| 0.0181 | 3.39 | 1700 | 0.6199 | 0.8434 | 0.8434 | 0.8434 | 0.8434 |
| 0.0816 | 3.59 | 1800 | 0.5081 | 0.8534 | 0.8534 | 0.8534 | 0.8534 |
| 0.039 | 3.79 | 1900 | 0.5349 | 0.8544 | 0.8544 | 0.8544 | 0.8544 |
| 0.0208 | 3.99 | 2000 | 0.5445 | 0.8544 | 0.8544 | 0.8544 | 0.8544 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|