modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
Kirili4ik/mbart_ruDialogSum | Kirili4ik | 2023-07-03T09:45:51Z | 338 | 25 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"ru",
"license:cc",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
language:
- ru
tags:
- mbart
inference:
parameters:
no_repeat_ngram_size: 4
num_beams: 5
datasets:
- IlyaGusev/gazeta
- samsum
- samsum_(translated_into_Russian)
widget:
- text: >
Джефф: Могу ли я обучить модель 🤗 Transformers на Amazon SageMaker?
Филипп: Конечно, вы можете использовать новый контейнер для глубокого
обучения HuggingFace.
Джефф: Хорошо.
Джефф: и как я могу начать?
Джефф: где я могу найти документацию?
Филипп: ок, ок, здесь можно найти все:
https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
model-index:
- name: mbart_ruDialogSum
results:
- task:
name: Abstractive Dialogue Summarization
type: abstractive-text-summarization
dataset:
name: SAMSum Corpus (translated to Russian)
type: samsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 34.5
- name: Validation ROUGE-L
type: rouge-l
value: 33
- name: Test ROUGE-1
type: rouge-1
value: 31
- name: Test ROUGE-L
type: rouge-l
value: 28
license: cc
---
### 📝 Description
MBart for Russian summarization, fine-tuned for **dialogue** summarization.
This model was first fine-tuned by [Ilya Gusev](https://hf.co/IlyaGusev) on the [Gazeta dataset](https://huggingface.co/datasets/IlyaGusev/gazeta). We then **fine-tuned** it on the [SamSum dataset](https://huggingface.co/datasets/samsum) **translated to Russian** with the Google Translate API.
🤗 Moreover, we have built a **Telegram bot [@summarization_bot](https://t.me/summarization_bot)** that runs inference with this model. Add it to a chat and get summaries instead of dozens of spam messages! 🤗
### ❓ How to use with code
```python
from transformers import AutoTokenizer, MBartForConditionalGeneration

# Download model and tokenizer
model_name = "Kirili4ik/mbart_ruDialogSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)
model.eval()

article_text = "..."

# Tokenize the dialogue (padded/truncated to 600 tokens)
input_ids = tokenizer(
    [article_text],
    max_length=600,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)["input_ids"]

# Summarize with beam search
output_ids = model.generate(
    input_ids=input_ids,
    top_k=0,
    num_beams=3,
    no_repeat_ngram_size=3,
)[0]

summary = tokenizer.decode(output_ids, skip_special_tokens=True)
print(summary)
``` |
KJan05/q-FrozenLake-v1-4x4-noSlippery | KJan05 | 2023-07-03T09:39:59Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T09:39:56Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumption: the environment comes from the Gymnasium package

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="KJan05/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
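Once loaded, the agent can be evaluated by acting greedily with respect to the Q-table. The sketch below continues from the snippet above (`model` and `env` already defined) and assumes the pickled dictionary follows the Deep RL course format, i.e. it exposes a `qtable` key next to `env_id`, and that the environment uses a Gymnasium-style step API:

```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```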
|
aronmal/Taxi-v3-Qtable | aronmal | 2023-07-03T09:39:02Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T09:39:00Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-Qtable
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # assumption: the environment comes from the Gymnasium package

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="aronmal/Taxi-v3-Qtable", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
aronmal/q-FrozenLake-v1-4x4-noSlippery | aronmal | 2023-07-03T09:37:17Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T09:37:14Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumption: the environment comes from the Gymnasium package

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="aronmal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SRDdev/ScriptForge_Plus | SRDdev | 2023-07-03T09:36:48Z | 132 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-17T05:25:32Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
widget:
- text: 10 Meditation Tips
example_title: Health Example
- text: Cooking red sauce pasta
example_title: Cooking Example
- text: Introduction to Keras
example_title: Technology Example
tags:
- text-generation
metrics:
- accuracy
---
# ScriptForge_Plus
## 🖊️ Model description
ScriptForge_Plus is a language model trained on a dataset of 5000 YouTube videos that cover different domains of AI.
ScriptForge_Plus is a causal language transformer based on the GPT-2 architecture: it predicts the probability of each word from the preceding words in the sequence, generating a distribution over the next word without incorporating future words.
The goal of ScriptForge_Plus is to generate scripts for YouTube videos that are coherent, informative, and engaging.
This can be useful for content creators who are looking for inspiration or who want to automate the process of generating video scripts.
To use ScriptForge_Plus, users can provide a prompt or a starting sentence, and the model will generate a sequence of words that follow the context and style of the training data.
Models
- [ScriptForge_Plus](https://huggingface.co/SRDdev/ScriptForge_Plus) : AI content Model
- [ScriptForge-small](https://huggingface.co/SRDdev/ScriptForge-medium) : Generalized Content Model
More models are coming soon...
## 🛒 Intended uses
The intended uses of ScriptForge_Plus include generating scripts for videos, providing inspiration for content creators, and automating the process of generating video scripts.
## 📝 How to use
You can use this model directly with a pipeline for text generation.
1. __Load Model__
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("SRDdev/ScriptForge_Plus")
model = AutoModelForCausalLM.from_pretrained("SRDdev/ScriptForge_Plus")
```
2. __Pipeline__
```python
from transformers import pipeline
generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
context = "What is Deep Learning"
length_to_generate = 250
script = generator(context, max_length=length_to_generate, do_sample=True)[0]['generated_text']
print(script)
```
<p style="opacity: 0.8">The model is still in beta and may generate inaccurate information.</p>
## 🎈Limitations and bias
> The model is trained on YouTube scripts and performs best on that domain. It may still generate inaccurate information, so users should cross-validate the results.
## Citations
```
@model{
Name=Shreyas Dixit
framework=Pytorch
Year=Jan 2023
Pipeline=text-generation
Github=https://github.com/SRDdev
LinkedIn=https://www.linkedin.com/in/srddev
}
``` |
DucHaiten/DucHaiten-FANCYxFANCY | DucHaiten | 2023-07-03T09:36:13Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T09:31:51Z | ---
license: creativeml-openrail-m
---
|
daiwenbin/xlm-roberta-base-finetuned-panx-de-fr | daiwenbin | 2023-07-03T09:28:37Z | 134 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-03T09:18:25Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2083
- F1: 0.8465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
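The list above maps onto `transformers` `TrainingArguments` roughly as in the sketch below; the output directory name is illustrative, and Adam with the listed betas and epsilon is the library default optimizer:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de-fr",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```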
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.36 | 1.0 | 715 | 0.2279 | 0.8163 |
| 0.1862 | 2.0 | 1430 | 0.1997 | 0.8363 |
| 0.1169 | 3.0 | 2145 | 0.2083 | 0.8465 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.13.3
|
sarada/t5-small-finetuned-xsum | sarada | 2023-07-03T09:24:54Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-03T09:21:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 61 | 3.0039 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vlkn/falcon_instruct_100 | vlkn | 2023-07-03T09:21:32Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-07-03T09:01:38Z | ---
tags:
- generated_from_trainer
model-index:
- name: falcon_instruct_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_instruct_100
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
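These values correspond roughly to the `TrainingArguments` sketch below (output directory illustrative). Note that the effective batch size is train_batch_size × gradient_accumulation_steps = 4 × 4 = 16, matching `total_train_batch_size` above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon_instruct_100",    # illustrative
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,       # 4 x 4 = 16 effective train batch size
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=100,
)
```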
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
joserodr68/q-FrozenLake-v1-4x4-noSlippery | joserodr68 | 2023-07-03T09:10:38Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T09:10:34Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumption: the environment comes from the Gymnasium package

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="joserodr68/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NancyAthghara23/red-panda-rpd | NancyAthghara23 | 2023-07-03T08:55:34Z | 10 | 3 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T08:52:05Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Red-Panda-rpd Dreambooth model trained by NancyAthghara23 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVRGU151
Sample pictures of this concept:


|
Soojeong/female_hanbok_1e-7_ckpt_icb | Soojeong | 2023-07-03T08:32:21Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T06:33:25Z |
---
license: creativeml-openrail-m
base_model: model/chilloutmix_NiPrunedFp16Fix
instance_prompt: a photo of wearing hanbok
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Soojeong/female_hanbok_1e-7_ckpt_icb
This is a DreamBooth model derived from model/chilloutmix_NiPrunedFp16Fix. The weights were trained with [DreamBooth](https://dreambooth.github.io/) on the instance prompt "a photo of wearing hanbok".
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
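A minimal inference sketch is shown below. It assumes the repository can be loaded directly with diffusers' `StableDiffusionPipeline` (as the pipeline tag suggests) and that a CUDA device is available; the output filename is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "Soojeong/female_hanbok_1e-7_ckpt_icb",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Use the instance prompt the weights were trained on.
image = pipe("a photo of wearing hanbok", num_inference_steps=30).images[0]
image.save("hanbok.png")
```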
|
leomortensen/best-bissell-vacuum | leomortensen | 2023-07-03T08:31:33Z | 0 | 0 | null | [
"region:us"
] | null | 2023-06-02T10:33:31Z | # The 20 Best Bissell Vacuum Cleaners Worth Every Penny In 2023
**In a market flooded with numerous brands of vacuum cleaners, Bissell stands out as a trusted and renowned name in the industry. With a rich history and a commitment to innovation, Bissell has earned its reputation for manufacturing high-quality and reliable vacuum cleaners. Whether you're dealing with pet hair, tackling tough stains, or simply maintaining a clean home, Bissell offers a wide range of vacuum cleaners to meet your specific needs. From powerful uprights to versatile handhelds, Bissell has a solution for every cleaning task. Let's dive in with** [**https://thekinglive.com**](https://thekinglive.com) **and discover why Bissell is the go-to brand for effective and efficient cleaning.**
**Bissell best vacuum cleaner - A famous brand from the USA**
Bissell, a renowned brand of vacuum cleaners originating from the USA, has established itself as one of the best choices on the market. With over 140 years of experience in the industry, Bissell has perfected the art of creating high-performance cleaning solutions for homes.
One of the standout features of Bissell vacuum cleaners is their exceptional cleaning power. Equipped with advanced technologies and strong suction capabilities, Bissell vacuums effectively remove dirt, dust, allergens, and even stubborn pet hair from various surfaces. Whether you have carpets, hardwood floors, or upholstery, Bissell offers models specifically designed to cater to your cleaning needs.
Bissell's commitment to innovation is evident in its range of cutting-edge features. Many Bissell vacuums incorporate specialized brush rolls, HEPA filtration systems, and pet-specific tools to deliver superior cleaning results. They also prioritize user convenience, offering features like easy-empty dirt bins, swivel steering for maneuverability, and cordless options for hassle-free cleaning.
Moreover, Bissell is known for its dedication to sustainability. They have implemented environmentally friendly practices, such as using recycled materials in their products and promoting energy-efficient designs.
When considering a vacuum cleaner, Bissell's stellar reputation, extensive product line, and commitment to quality make it a brand that customers can rely on for exceptional cleaning performance and durability.

[**TOP Rated 20 Best Bissell Vacuum Cleaners Worth Buying In 2023**](https://thekinglive.com/what-is-the-best-rated-bissell-vacuum-cleaner-reviews.html)
In this section, we will explore the top 20 **best Bissell vacuum** cleaners that are worth buying in 2023\. Join us as we unveil the top-rated Bissell vacuum cleaners that are worth every penny in 2023.
Here are the Bissell vacuum reviews of the top 20 for you:
1. Bissell CleanView Swivel Rewind Pet Upright Vacuum Cleaner: Features powerful suction, specialized pet tools, and a swivel steering design for easy maneuverability.
2. Bissell Pet Hair Eraser Turbo Plus Upright Vacuum Cleaner: Equipped with a tangle-free brush roll, powerful suction, and specialized pet tools for effective pet hair removal.
3. Bissell CrossWave All-in-One Multi-Surface Wet Dry Vacuum Cleaner: Offers simultaneous vacuuming and mopping capabilities, suitable for cleaning multiple surfaces in one pass.
4. Bissell PowerEdge Pet Hardwood Floor Bagless Stick Vacuum Cleaner: Designed with a V-shaped nozzle for capturing pet hair and debris along edges and corners.
5. Bissell Iconpet Cordless Stick Vacuum Cleaner: Provides cordless convenience, strong suction power, and specialized pet tools for efficient cleaning.
6. Bissell CleanView Rewind Deluxe Upright Vacuum Cleaner: Features a powerful multi-cyclonic system, automatic cord rewind, and a wide cleaning path for quick and thorough cleaning.
7. Bissell ProHeat 2X Revolution Max Clean Pet Pro Full-Size Carpet Cleaner: Designed for deep cleaning carpets and tackling tough pet stains with HeatWave Technology and specialized pet cleaning tools.
8. Bissell PowerGlide Pet Hair Bagless Vacuum Cleaner: Equipped with strong suction, a tangle-free brush roll, and specialized pet tools for effective pet hair removal.
9. Bissell Featherweight Stick Lightweight Bagless Vacuum Cleaner: Offers lightweight and versatile cleaning with the ability to transform into a handheld vacuum.
10. Bissell MultiClean Wet/Dry Garage and Auto Vacuum Cleaner: Ideal for cleaning both wet and dry messes in garages and automobiles, featuring powerful suction and various attachments.
11. Bissell PowerFresh Steam Mop: It utilizes the power of steam to sanitize and clean hard floors, eliminating 99.9% of germs and bacteria.
12. Bissell Zing Bagged Canister Vacuum Cleaner: Features a lightweight and compact design, powerful suction, and a bagged system for easy disposal.
13. Bissell PowerLifter PowerBrush Upright Carpet Cleaner and Shampooer: Offers a deep-cleaning power brush and a removable nozzle for efficient carpet cleaning and spot treatment.
14. Bissell AirRam Cordless Stick Vacuum Cleaner: Provides cordless convenience, strong suction, and a compact design for easy maneuverability.
15. Bissell SpotClean ProHeat Portable Spot and Stain Carpet Cleaner: Designed for targeted spot cleaning, it features a built-in heater for effective stain removal.
16. Bissell CleanView Connect Robotic Vacuum Cleaner: Offers hands-free cleaning with Wi-Fi connectivity, triple-action cleaning, and a low-profile design for reaching under furniture.
17. Bissell PowerForce Helix Bagless Upright Vacuum Cleaner: Features a helix dirt separation system, strong suction, and a large-capacity dirt bin for efficient cleaning.
18. Bissell Garage Pro Wall-Mounted Wet Dry Car Vacuum: Ideal for cleaning vehicles and garages, it offers powerful suction, a wall-mountable design, and a blower function.
19. Bissell Symphony Vac and Steam 2-in-1 Vacuum and Steam Mop: This appliance combines vacuuming and steam cleaning in one appliance, allowing for simultaneous dirt and germ removal.
20. Bissell SpotBot Pet Hands-Free Spot and Stain Portable Carpet Cleaner: Designed for hands-free spot cleaning with automatic cleaning cycles and specialized pet cleaning solutions.

**Frequently Asked Questions (FAQs)**
**1\. Is Bissell's product safe to use around kids and pets?**
Yes, Bissell products are generally safe to use around kids and pets when used according to the manufacturer's instructions. Bissell understands the importance of creating products that prioritize safety, especially in households with children and pets. However, it is always recommended to exercise caution and follow safety guidelines while using any cleaning products.
Bissell takes various measures to ensure the safety of its products. For example, they use materials that are safe for use in homes and have undergone rigorous testing to meet safety standards. Additionally, Bissell designs its products with features like tangle-free brush rolls, specialized pet tools, and multi-level filtration systems to minimize potential hazards. It's also advisable to keep an eye on pets and children when operating cleaning equipment to prevent accidental injuries.
**2\. How often should you change your Bissell vacuum filter?**
The frequency of changing the filter in your Bissell vacuum will depend on several factors, including the model of your vacuum, the level of usage, and the type of environment you are cleaning. However, as a general guideline, it is recommended to check the condition of your Bissell vacuum filter regularly and replace it as needed.
Some Bissell vacuum models have washable filters that can be cleaned and reused, while others have replaceable filters that need to be replaced when they become visibly dirty or clogged. If you have a washable filter, it is typically recommended to clean it at least once a month, or more frequently if you have pets or if you're vacuuming in a particularly dusty environment.
For replaceable filters, it's a good idea to consult the user manual or the manufacturer's recommendations for your specific Bissell vacuum model. They will provide guidelines on when to replace the filter based on average usage and environmental conditions.
Regularly maintaining and replacing the filter in your Bissell vacuum is important to ensure optimal performance and good indoor air quality. A clean filter helps to capture dust, debris, and allergens effectively, ensuring efficient cleaning and reducing the strain on the vacuum's motor.
**3\. How long does it take for your carpet to get dried after using Bissell?**
The drying time for carpets after using a Bissell carpet cleaner can vary depending on several factors, such as the type of carpet, the level of saturation during cleaning, the ambient humidity, and the airflow in the room. Generally, it can take anywhere from a few hours to a full day for carpets to completely dry.
Based on our Bissell vacuum review, its carpet cleaners are designed to minimize excess water and facilitate quicker drying. They typically employ powerful suction and extraction capabilities to remove as much moisture as possible during the cleaning process. However, the drying time can still be influenced by the factors mentioned above.
To help expedite the drying process, you can take the following steps:
* Ensure proper ventilation: open windows or turn on fans to promote air circulation in the room. This helps evaporate moisture more quickly.
* Use dehumidifiers: If the ambient humidity is high, running a dehumidifier can help remove excess moisture from the air, aiding in faster drying.
* Avoid heavy foot traffic: Limit walking on the carpet while it is still damp to prevent dirt and debris from being tracked in and to avoid potential damage to the fibers.
* Lift furniture: If possible, raise furniture legs or place aluminum foil or plastic film under the legs to prevent them from coming into contact with the damp carpet.
By investing in the best Bissell vacuum, you can enjoy the benefits of powerful suction, advanced cleaning technologies, and specialized attachments that cater to your unique cleaning needs. With a [**Bissell vacuum cleaner**](https://www.tumblr.com/topvacuumcleaners/721457318981664768/bissell-pet-hair-eraser-handheld), you can confidently tackle dirt, pet hair, and other messes, ensuring a cleaner and more comfortable living environment for you and your loved ones. |
Sourabh2/pixelcopter-s3 | Sourabh2 | 2023-07-03T08:26:17Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T08:20:56Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcopter-s3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 32.30 +/- 28.84
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
|
hoanghoavienvo/roberta-base-Dep | hoanghoavienvo | 2023-07-03T08:13:24Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T06:42:10Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-Dep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-Dep
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4653
- Accuracy: 0.8333
- F1: 0.8992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 235 | 0.4592 | 0.8217 | 0.8911 |
| No log | 2.0 | 470 | 0.4116 | 0.845 | 0.9086 |
| 0.2907 | 3.0 | 705 | 0.4892 | 0.8133 | 0.8845 |
| 0.2907 | 4.0 | 940 | 0.4532 | 0.835 | 0.9011 |
| 0.2694 | 5.0 | 1175 | 0.4653 | 0.8333 | 0.8992 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jantz/IU-RVC_V2-300_Epochs | jantz | 2023-07-03T08:12:30Z | 0 | 1 | null | [
"region:us"
] | null | 2023-07-03T02:07:14Z | - Dataset: 1 hour of IU songs.
- Vocal Separation: UVR5 model was used to separate vocals. The process involved: Kim Vocal 1 -> Reverb HQ -> Karaoke 2.
- Additional Processing: Noise gate and manual touch-ups were performed in Audacity. |
manmyung/q-Taxi-v3 | manmyung | 2023-07-03T08:02:34Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T08:02:32Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # assumption: the environment comes from the Gymnasium package

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="manmyung/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
manmyung/q-FrozenLake-v1-4x4-noSlippery | manmyung | 2023-07-03T07:53:56Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T07:53:54Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumption: the environment comes from the Gymnasium package

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="manmyung/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
daiwenbin/xlm-roberta-base-finetuned-panx-de | daiwenbin | 2023-07-03T07:50:20Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-03T07:35:18Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8327865206027916
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1947
- F1: 0.8328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3446 | 1.0 | 525 | 0.2154 | 0.8031 |
| 0.1782 | 2.0 | 1050 | 0.2004 | 0.8228 |
| 0.1128 | 3.0 | 1575 | 0.1947 | 0.8328 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.13.3
|
nomad-ai/poca-SoccerTwos-test | nomad-ai | 2023-07-03T07:37:43Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-07-03T07:37:36Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nomad-ai/poca-SoccerTwos-test
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
aronmal/ppo-Huggy | aronmal | 2023-07-03T07:33:12Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-03T07:33:07Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: aronmal/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
heka-ai/e5-90k | heka-ai | 2023-07-03T07:31:44Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-03T07:31:39Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# heka-ai/e5-90k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('heka-ai/e5-90k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/e5-90k)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 10000 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 100000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
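Put together, the training call would have looked roughly like the sketch below. The dataloader contents, the loss constructor arguments, and the starting checkpoint are assumptions; only the hyperparameters listed above are taken from this card:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer
from gpl.toolkit.loss import MarginDistillationLoss  # loss class listed above, from the GPL package

model = SentenceTransformer("heka-ai/e5-90k")  # illustrative; the actual starting checkpoint is not stated
train_samples = []  # placeholder: GPL builds the (query, passage, margin) training examples here
train_dataloader = DataLoader(train_samples, batch_size=16, shuffle=False)
train_loss = MarginDistillationLoss(model=model)  # constructor arguments are an assumption

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    steps_per_epoch=100000,
    warmup_steps=1000,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```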
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
tlapusan/bert-finetuned-win_file | tlapusan | 2023-07-03T07:28:41Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-30T12:40:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-win_file
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-win_file
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0149
- Precision: 0.9566
- Recall: 0.9935
- F1: 0.9747
- Accuracy: 0.9962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1053 | 1.0 | 938 | 0.0163 | 0.9550 | 0.9931 | 0.9736 | 0.9957 |
| 0.0123 | 2.0 | 1876 | 0.0149 | 0.9566 | 0.9935 | 0.9747 | 0.9962 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vladkolev/distilroberta-base-finetuned-emotion | vladkolev | 2023-07-03T07:27:32Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-03-21T08:29:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilroberta-base-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-emotion
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3438
- Accuracy: 0.9004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.615 | 1.0 | 748 | 0.2832 | 0.9004 |
| 0.2716 | 2.0 | 1496 | 0.2632 | 0.9044 |
| 0.1929 | 3.0 | 2244 | 0.3124 | 0.9071 |
| 0.1559 | 4.0 | 2992 | 0.3258 | 0.8971 |
| 0.1185 | 5.0 | 3740 | 0.3438 | 0.9004 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
NasimB/gpt2-cl-rarity-sampling-5 | NasimB | 2023-07-03T07:01:49Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-03T04:30:07Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cl-rarity-sampling-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cl-rarity-sampling-5
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.6015 | 0.05 | 500 | 5.8621 |
| 5.3617 | 0.11 | 1000 | 5.4637 |
| 5.0237 | 0.16 | 1500 | 5.2314 |
| 4.8011 | 0.22 | 2000 | 5.0828 |
| 4.6311 | 0.27 | 2500 | 4.9993 |
| 4.504 | 0.33 | 3000 | 4.9326 |
| 4.3948 | 0.38 | 3500 | 4.8809 |
| 4.2939 | 0.44 | 4000 | 4.8421 |
| 4.2022 | 0.49 | 4500 | 4.8057 |
| 4.1111 | 0.55 | 5000 | 4.7772 |
| 4.0184 | 0.6 | 5500 | 4.7492 |
| 3.9458 | 0.66 | 6000 | 4.7347 |
| 3.8712 | 0.71 | 6500 | 4.7195 |
| 3.8079 | 0.77 | 7000 | 4.7051 |
| 3.7575 | 0.82 | 7500 | 4.6946 |
| 3.716 | 0.88 | 8000 | 4.6904 |
| 3.6978 | 0.93 | 8500 | 4.6861 |
| 3.6899 | 0.99 | 9000 | 4.6848 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
vn0161/autotrain-bhoj-5n53-vq5m-71714138701 | vn0161 | 2023-07-03T07:01:14Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:vn0161/autotrain-data-bhoj-5n53-vq5m",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T07:00:26Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- vn0161/autotrain-data-bhoj-5n53-vq5m
co2_eq_emissions:
emissions: 0.37493319480549947
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
- CO2 Emissions (in grams): 0.3749
## Validation Metrics
loss: 0.35270485281944275
f1: 0.8472906403940886
precision: 0.8958333333333334
recall: 0.8037383177570093
auc: 0.9286837278364922
accuracy: 0.8551401869158879
|
SnorreStorjord/whisper-large-no | SnorreStorjord | 2023-07-03T06:55:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"no",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-27T11:44:47Z | ---
language:
- no
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- NbAiLab/NPSC
model-index:
- name: Whisper Large NO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large NO (WIP)
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the NbAiLab/NPSC dataset.
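A minimal transcription sketch, assuming the checkpoint loads through the standard `transformers` automatic-speech-recognition pipeline; the audio file path is illustrative:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="SnorreStorjord/whisper-large-no",
    chunk_length_s=30,  # split long audio into 30-second windows
)
print(asr("norwegian_speech_sample.wav")["text"])
```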
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nolanaatama/phngyfrmfvnghtstfrddysrvcv2300pchnlgspdrwb | nolanaatama | 2023-07-03T06:51:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T06:37:26Z | ---
license: creativeml-openrail-m
---
|
digiplay/CiderMix_ciderR | digiplay | 2023-07-03T06:32:08Z | 40 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T06:22:28Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/20051?modelVersionId=23816
Original Author's DEMO image :

|
Tiru8055/ppo-Pyramids | Tiru8055 | 2023-07-03T06:08:55Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-03T06:08:50Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Tiru8055/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Shubham09/falcon7b-test-updated-policies | Shubham09 | 2023-07-03T05:55:47Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-03T05:55:25Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
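The flags above describe a 4-bit NF4 setup. Rebuilt as a `transformers` `BitsAndBytesConfig`, it would look roughly like this sketch (values copied from the list; the config object itself is a reconstruction, not part of the repository):

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```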
### Framework versions
- PEFT 0.4.0.dev0
|
google/umt5-base | google | 2023-07-03T05:37:52Z | 1,831 | 13 | transformers | [
"transformers",
"pytorch",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-02T01:49:59Z | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's UMT5](https://github.com/google-research/multilingual-t5)
UMT5 is pretrained on an updated version of the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 107 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: UMT5 was only pre-trained on mC4, without any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
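A minimal loading sketch, assuming a `transformers` release with UMT5 support (roughly v4.31 or later) and that the released tokenizer exposes T5-style sentinel tokens; outputs are only meaningful after fine-tuning:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/umt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-base")

# Span-corruption style input with a sentinel token, as used during pretraining.
inputs = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```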
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=umt5)
Paper: [UniMax, Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi)
Authors: *by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant*
## Abstract
*Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.* |
google/umt5-xl | google | 2023-07-03T05:37:35Z | 3,435 | 16 | transformers | [
"transformers",
"pytorch",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-02T01:51:24Z | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's UMT5](https://github.com/google-research/multilingual-t5)
UMT5 is pretrained on an updated version of the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 107 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: UMT5 was only pre-trained on mC4, without any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
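A minimal loading sketch with 🤗 Transformers (a sketch only: it assumes a recent `transformers` release with umT5 support, and the untuned checkpoint's generations are not expected to be meaningful):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google/umt5-xl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The checkpoint is span-corruption pretrained only; fine-tune it on your task
# before relying on its outputs.
inputs = tokenizer("UniMax sampling balances languages during pretraining.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```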
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=umt5)
Paper: [UniMax, Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi)
Authors: *Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant*
## Abstract
*Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.* |
google/umt5-xxl | google | 2023-07-03T05:37:17Z | 286 | 19 | transformers | [
"transformers",
"pytorch",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-02T02:15:00Z | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's UMT5](https://github.com/google-research/multilingual-t5)
UMT5 is pretrained on an updated version of the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 107 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: UMT5 was only pre-trained on mC4, without any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
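Because the XXL checkpoint is large, a common loading pattern is reduced precision with automatic device placement (a sketch only: it assumes a `transformers` release with umT5 support plus `accelerate`, and the model still has to be fine-tuned before use):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google/umt5-xxl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # roughly halves memory versus float32
    device_map="auto",           # requires `accelerate`; shards across available devices
)
print(f"Loaded ~{model.num_parameters() / 1e9:.1f}B parameters")
```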
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=umt5)
Paper: [UniMax, Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi)
Authors: *Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant*
## Abstract
*Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.* |
emya/outputs | emya | 2023-07-03T05:29:25Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-02T22:00:13Z |
---
license: creativeml-openrail-m
base_model: outputs
instance_prompt: a logo of a service, named Mcdonald
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - emya/outputs
This is a DreamBooth model derived from `outputs`. The weights were trained on the prompt "a logo of a service, named Mcdonald" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
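A minimal inference sketch with 🤗 Diffusers (a sketch only; scheduler, guidance scale and resolution are left at their defaults):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("emya/outputs", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Reuse the instance prompt the weights were trained on.
prompt = "a logo of a service, named Mcdonald"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("logo.png")
```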
|
Valent2809/classifier-model | Valent2809 | 2023-07-03T05:15:49Z | 26 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T03:27:44Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Valent2809/classifier-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Valent2809/classifier-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3103
- Validation Loss: 0.4343
- Train Accuracy: 0.8478
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 125, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6413 | 0.5342 | 0.7826 | 0 |
| 0.4819 | 0.4865 | 0.8043 | 1 |
| 0.3806 | 0.4798 | 0.7826 | 2 |
| 0.3400 | 0.4362 | 0.8261 | 3 |
| 0.3009 | 0.4343 | 0.8478 | 4 |
| 0.3103 | 0.4343 | 0.8478 | 5 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gautam1989/mt5-small-finetuned-amazon-en-es | gautam1989 | 2023-07-03T04:54:53Z | 7 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-03T04:00:20Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gautam1989/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gautam1989/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mT5-small](https://huggingface.co/google/mT5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.2895
- Validation Loss: 3.3954
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
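For reference, a minimal inference sketch (a sketch only: the `framework="tf"` flag reflects the TensorFlow weights in this repository, and the example review is illustrative):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="gautam1989/mt5-small-finetuned-amazon-en-es",
    framework="tf",  # this repository ships TensorFlow weights
)

review = "I bought this for my daughter and she loves it. Great quality for the price."
print(summarizer(review, max_length=30)[0]["summary_text"])
```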
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.6709 | 4.4471 | 0 |
| 5.9597 | 3.7763 | 1 |
| 5.1538 | 3.6068 | 2 |
| 4.7554 | 3.5175 | 3 |
| 4.4603 | 3.4380 | 4 |
| 4.2895 | 3.3954 | 5 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta | chriskim2273 | 2023-07-03T04:50:05Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-03T04:13:01Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7219
## Model description
More information needed
## Intended uses & limitations
More information needed
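For reference, a minimal extractive-QA sketch (a sketch only; the question and context are illustrative, following the company-name-extraction use implied by the model name):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta",
)

result = qa(
    question="What is the name of the company?",
    context="Acme Robotics announced a $25M Series B round to expand its IoT platform.",
)
print(result["answer"], result["score"])
```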
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 45 | 0.5443 |
| No log | 2.0 | 90 | 0.6332 |
| No log | 3.0 | 135 | 0.6942 |
| No log | 4.0 | 180 | 0.6725 |
| No log | 5.0 | 225 | 0.7219 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-48 | hopkins | 2023-07-03T04:27:43Z | 65 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T04:09:58Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-48
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9901
- Bleu: 6.8796
## Model description
More information needed
## Intended uses & limitations
More information needed
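For reference, a minimal English-to-Korean generation sketch following the standard mBART-50 pattern (a sketch only; it assumes the fine-tuned checkpoint kept the original language codes, with `en_XX` as source and `ko_KR` as target):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "hopkins/mbart-finetuned-eng-kor-48"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"],  # force Korean output
    num_beams=5,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```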
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-49 | hopkins | 2023-07-03T04:25:49Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T04:12:15Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-49
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9908
- Bleu: 7.2223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
deepghs/imgutils-models | deepghs | 2023-07-03T04:12:18Z | 0 | 6 | null | [
"onnx",
"dataset:deepghs/chafen_arknights",
"dataset:deepghs/monochrome_danbooru",
"license:mit",
"region:us"
] | null | 2023-03-11T08:37:38Z | ---
license: mit
datasets:
- deepghs/chafen_arknights
- deepghs/monochrome_danbooru
metrics:
- accuracy
---
# imgutils-models
This repository includes all the models in [deepghs/imgutils](https://github.com/deepghs/imgutils).
## LPIPS
This model is used for clustering anime images (named `差分` in Chinese), based on [richzhang/PerceptualSimilarity](https://github.com/richzhang/PerceptualSimilarity), trained with dataset [deepghs/chafen_arknights(private)](https://huggingface.co/datasets/deepghs/chafen_arknights).
When the threshold is `0.45`, the [adjusted rand score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html) can reach `0.995`.
File lists:
* `lpips_diff.onnx`, feature difference.
* `lpips_feature.onnx`, feature extracting.
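For reference, a minimal sketch that downloads one of the ONNX files and inspects the inputs it expects with `onnxruntime` (a sketch only: the in-repo file path is an assumption, so check the repository file list, and the image preprocessing required by the network is not shown):
```python
import onnxruntime as ort
from huggingface_hub import hf_hub_download

# The path inside the repo is an assumption; adjust it to the actual file list.
model_path = hf_hub_download("deepghs/imgutils-models", "lpips/lpips_feature.onnx")

session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)  # the tensor(s) the model expects
```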
## Monochrome
These models are used for monochrome image classification, are based on CNNs and Transformers, and were trained with the dataset [deepghs/monochrome_danbooru(private)](https://huggingface.co/datasets/deepghs/monochrome_danbooru).
The following are the checkpoints that have been formally put into use, all based on the Caformer architecture:
| Checkpoint | Algorithm | Safe Level | Accuracy | False Negative | False Positive |
|:----------------------------:|:---------:|:----------:|:----------:|:--------------:|:--------------:|
| monochrome-caformer-40 | caformer | 0 | 96.41% | 2.69% | 0.89% |
| **monochrome-caformer-110** | caformer | 0 | **96.97%** | 1.57% | 1.46% |
| monochrome-caformer_safe2-80 | caformer | 2 | 94.84% | **1.12%** | 4.03% |
| monochrome-caformer_safe4-70 | caformer | 4 | 94.28% | **0.67%** | 5.04% |
**`monochrome-caformer-110` has the best overall accuracy** among them. However, because this model is often used to screen out monochrome images, and we want to catch as many of them as possible without omission, we also provide weighted models (`safe2` and `safe4`).
Although their overall accuracy is slightly lower, their False Negative rate (misidentifying a monochrome image as a colored one) is also lower, making them more suitable for batch screening.
## Deepdanbooru
`deepdanbooru` is a model used to tag anime images. Here, we provide a table for tag classification called `deepdanbooru_tags.csv`,
as well as an ONNX model (from [chinoll/deepdanbooru](https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags)).
It's worth noting that due to the poor quality of the deepdanbooru model itself and the relatively old dataset,
it is only for testing purposes and is not recommended to be used as the main classification model. We recommend using the `wd14` model instead, see:
* https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags
|
anurag629/ppo-LunarLander-v2 | anurag629 | 2023-07-03T04:08:06Z | 1 | 1 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T04:07:47Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.10 +/- 12.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; adjust it to the actual .zip in this repo.
checkpoint = load_from_hub(repo_id="anurag629/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
vineetsharma/whisper-base-finetuned-gtzan | vineetsharma | 2023-07-03T04:03:36Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-03T01:16:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-base-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6867
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
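For reference, a minimal genre-classification sketch (a sketch only; the audio path is a placeholder, ideally a GTZAN-style 30-second excerpt):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="vineetsharma/whisper-base-finetuned-gtzan")

# The path is a placeholder; any local audio file readable by ffmpeg works.
for prediction in classifier("my_song.wav", top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```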
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9075 | 1.0 | 57 | 1.0000 | 0.58 |
| 0.4569 | 2.0 | 114 | 0.6073 | 0.83 |
| 0.3761 | 3.0 | 171 | 0.6410 | 0.8 |
| 0.3049 | 4.0 | 228 | 0.4536 | 0.86 |
| 0.0284 | 5.0 | 285 | 0.5120 | 0.85 |
| 0.0165 | 6.0 | 342 | 0.4856 | 0.89 |
| 0.0087 | 7.0 | 399 | 0.6814 | 0.87 |
| 0.0038 | 8.0 | 456 | 0.7059 | 0.85 |
| 0.0032 | 9.0 | 513 | 0.6831 | 0.87 |
| 0.0034 | 10.0 | 570 | 0.6867 | 0.87 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3 |
hopkins/mbart-finetuned-eng-deu-47 | hopkins | 2023-07-03T03:40:49Z | 43 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:22:38Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-47
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6483
- Bleu: 20.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-44 | hopkins | 2023-07-03T03:32:33Z | 55 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:14:52Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-44
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9949
- Bleu: 6.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alphaduriendur/ner-deBERTa-v2-x-large | alphaduriendur | 2023-07-03T03:27:58Z | 56 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-03T01:44:37Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-deBERTa-v2-x-large
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: test
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.7384370015948963
- name: Recall
type: recall
value: 0.7377832861189801
- name: F1
type: f1
value: 0.7381099991143388
- name: Accuracy
type: accuracy
value: 0.9460966943038657
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-deBERTa-v2-x-large
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3963
- Precision: 0.7384
- Recall: 0.7378
- F1: 0.7381
- Accuracy: 0.9461
## Model description
More information needed
## Intended uses & limitations
More information needed
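For reference, a minimal tagging sketch (a sketch only; `aggregation_strategy="simple"` merges word pieces into whole entity spans):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="alphaduriendur/ner-deBERTa-v2-x-large",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```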
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 219 | 0.4082 | 0.6932 | 0.7087 | 0.7009 | 0.9386 |
| No log | 2.0 | 439 | 0.4299 | 0.7467 | 0.6948 | 0.7198 | 0.9426 |
| 0.0094 | 3.0 | 658 | 0.4086 | 0.7435 | 0.7072 | 0.7249 | 0.9441 |
| 0.0094 | 4.0 | 878 | 0.3873 | 0.7426 | 0.7420 | 0.7423 | 0.9461 |
| 0.0054 | 4.99 | 1095 | 0.3963 | 0.7384 | 0.7378 | 0.7381 | 0.9461 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-45 | hopkins | 2023-07-03T03:16:25Z | 53 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:58:21Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-45
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7646
- Bleu: 21.8949
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
samzoozi/atari_game | samzoozi | 2023-07-03T03:04:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T03:03:41Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 718.00 +/- 220.55
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga samzoozi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga samzoozi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga samzoozi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jinlee74/distilbert-base-uncased-finetuned-emotions | jinlee74 | 2023-07-03T02:59:35Z | 55 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T00:11:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9415
- name: F1
type: f1
value: 0.9416116671925132
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2357
- Accuracy: 0.9415
- F1: 0.9416
## Model description
More information needed
## Intended uses & limitations
More information needed
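For reference, a minimal classification sketch (a sketch only; the example sentence is illustrative and the returned label names depend on the fine-tuning config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jinlee74/distilbert-base-uncased-finetuned-emotions",
)

print(classifier("I can't believe how wonderful this day turned out!"))
# e.g. [{'label': 'joy', 'score': 0.99}]; exact labels come from the emotion dataset mapping
```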
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.016 | 1.0 | 250 | 0.2262 | 0.9405 | 0.9404 |
| 0.011 | 2.0 | 500 | 0.2357 | 0.9415 | 0.9416 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0.dev20230316
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AshtakaOOf/ssambatea-locon | AshtakaOOf | 2023-07-03T02:58:58Z | 0 | 1 | null | [
"Text-to-Image",
"anime",
"lora",
"locon",
"lycoris",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-07-03T01:36:57Z | ---
license: cc-by-nc-sa-4.0
tags:
- Text-to-Image
- anime
- lora
- locon
- lycoris
---
# SSAMBAtea Style LoCon

## token: **ssambatea**
Trained on SSAMBAtea artwork
This is a LoCon and requires the LyCORIS extension to work.
I am planning on making a new, improved dataset for a V2.
# License
[CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
hopkins/mbart-finetuned-eng-ind-42 | hopkins | 2023-07-03T02:57:04Z | 60 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:39:13Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-42
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7642
- Bleu: 21.7118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alibaba-pai/pai-diffusion-artist-large-zh | alibaba-pai | 2023-07-03T02:56:37Z | 14 | 7 | diffusers | [
"diffusers",
"pytorch",
"text-to-image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-03T09:38:48Z | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- text-to-image
---
# Chinese Diffusion Model (Artist, 512 Resolution)
## 简介 Brief Introduction
我们开源了一个中文 Diffusion 模型,您可以直接输入中文提示词,我们为您呈现精美的艺术风格图片。本模型的默认分辨率是 512*512。
We release a Chinese diffusion model, which is able to generate high-quality artistic images according to the prompts you input. The default resolution of this model is 512*512.
* Github: [EasyNLP](https://github.com/alibaba/EasyNLP)
## 使用 Usage
本模型支持 `diffusers`,可以参考以下范例代码:
This model supports `diffusers`. Please refer to the following code:
```python
from diffusers import StableDiffusionPipeline
model_id = "alibaba-pai/pai-diffusion-artist-large-zh"
pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")
prompt = "雾蒙蒙的日出在湖面上"
image = pipe(prompt).images[0]
image.save("result.png")
```
## 作品展示 Gallery
| prompt: 浮岛,天空,白云,城堡,幻想世界 | prompt: 红白玫瑰花,很多花瓣,绽放 |
| ---------------------------------------- | ---------------------------------- |
| negative_prompt: 油画,模糊,雾蒙蒙 | negative_prompt: |
|  |  |
| prompt: 亭台楼阁,曲径通幽,水墨绘画,中国风 | prompt: 阳光,彩虹,小白马 |
| -------------------------------------------- | -------------------------- |
| negative_prompt: 油画,彩色 | negative_prompt: |
|  |  |
## 使用须知 Notice for Use
使用上述模型需遵守[AIGC模型开源特别条款](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html)。
If you want to use this model, please read this [document](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html) carefully and abide by the terms.
|
hopkins/mbart-finetuned-eng-deu-44 | hopkins | 2023-07-03T02:56:05Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:37:53Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-44
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6513
- Bleu: 20.8990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
digiplay/Landscape_PhotoReal_v1 | digiplay | 2023-07-03T02:53:33Z | 620 | 7 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T02:20:00Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/71987/landscapephotoreal?modelVersionId=76750
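For reference, a minimal inference sketch with 🤗 Diffusers using a shortened version of the sample prompt below (a sketch only; negative prompt, sampler and resolution are left at their defaults):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/Landscape_PhotoReal_v1", torch_dtype=torch.float16
).to("cuda")

prompt = "magnificent scenery, wide landscape, old ruins buildings, fantasy, birdview, best quality, masterpiece, ultra high res, photorealistic, kkw-ph1"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("landscape.png")
```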
Sample images and prompt :
magnificent scenery, wide landscape, sharp and crisp background, very beautiful landscape, old ruins buildings, fantasy, birdview, best quality, masterpiece, ultra high res, dark blue light, cloudy, photo, photorealistic, wide view, kkw-ph1


photorealistic modern living room, sharp and crisp background, sofa, low table, bookshelf, parks and buildings from window, wood and flower, beautiful landscape, best quality, masterpiece, hires, in the morning light, detailed lighting, blue sky, (((photo))), (((photorealistic))) ,kkw-ph1, wide shot, web meeting background

|
hopkins/mbart-finetuned-eng-kor-41 | hopkins | 2023-07-03T02:39:04Z | 63 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:25:38Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-41
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9881
- Bleu: 7.0463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-42 | hopkins | 2023-07-03T02:38:45Z | 57 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:24:45Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-42
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6513
- Bleu: 20.8783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-40 | hopkins | 2023-07-03T02:37:25Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:19:49Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-40
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9919
- Bleu: 7.0359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-39 | hopkins | 2023-07-03T02:31:10Z | 53 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:13:29Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-39
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9925
- Bleu: 6.7954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
djifg/grow_classification_xlmr2 | djifg | 2023-07-03T02:28:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T01:59:42Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: grow_classification_xlmr2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# grow_classification_xlmr2
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5585
- Accuracy: 0.9309
- F1: 0.9297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2832 | 1.0 | 436 | 0.4686 | 0.8870 | 0.8872 |
| 0.0717 | 2.0 | 872 | 0.5915 | 0.8964 | 0.8950 |
| 0.0374 | 3.0 | 1308 | 0.4898 | 0.9276 | 0.9266 |
| 0.0205 | 4.0 | 1744 | 0.5333 | 0.9271 | 0.9257 |
| 0.0101 | 5.0 | 2180 | 0.5585 | 0.9309 | 0.9297 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jncraton/fastchat-t5-3b-v1.0-ct2-int8 | jncraton | 2023-07-03T02:24:58Z | 3 | 2 | transformers | [
"transformers",
"license:apache-2.0",
"region:us"
] | null | 2023-07-03T01:59:59Z | ---
license: apache-2.0
inference: false
---
# FastChat-T5 Model Card
## Model details
**Model type:**
FastChat-T5 is an open-source chatbot trained by fine-tuning Flan-t5-xl (3B parameters) on user-shared conversations collected from ShareGPT.
It is based on an encoder-decoder transformer architecture, and can autoregressively generate responses to users' inputs.
**Model date:**
FastChat-T5 was trained in April 2023.
**Organizations developing the model:**
The FastChat developers, primarily Dacheng Li, Lianmin Zheng and Hao Zhang.
**Paper or resources for more information:**
https://github.com/lm-sys/FastChat#FastChat-T5
**License:**
Apache License 2.0
**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:**
The primary intended use of FastChat-T5 is commercial applications of large language models and chatbots. It can also be used for research purposes.
**Primary intended users:**
The primary intended users of the model are entrepreneurs and researchers in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Training details
It processes the ShareGPT data in the form of question answering. Each ChatGPT response is treated as an answer, and the preceding conversation between the user and ChatGPT is treated as the question.
The encoder bi-directionally encodes a question into a hidden representation. The decoder uses cross-attention to attend to this representation while generating an answer uni-directionally from a start token.
This model is fine-tuned for 3 epochs, with a max learning rate 2e-5, warmup ratio 0.03, and a cosine learning rate schedule.
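Below is a minimal generation sketch with CTranslate2 (a sketch only: it assumes the converted model files sit at the root of this repository and that the original `lmsys/fastchat-t5-3b-v1.0` tokenizer is the right one to pair with them):
```python
import ctranslate2
import transformers
from huggingface_hub import snapshot_download

model_dir = snapshot_download("jncraton/fastchat-t5-3b-v1.0-ct2-int8")  # assumes CT2 files at repo root
translator = ctranslate2.Translator(model_dir, compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained("lmsys/fastchat-t5-3b-v1.0")

prompt = "What is the capital of France?"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = translator.translate_batch([tokens], max_decoding_length=64)
output_ids = tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```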
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
|
AhmedTaha012/gptneo-TxtToJson-v0.1.16 | AhmedTaha012 | 2023-07-03T02:16:00Z | 79 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-03T01:43:59Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gptneo-TxtToJson-v0.1.16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gptneo-TxtToJson-v0.1.16
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1180
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 88 | 0.6397 |
| No log | 2.0 | 176 | 0.5158 |
| No log | 3.0 | 264 | 0.4083 |
| No log | 4.0 | 352 | 0.2929 |
| No log | 5.0 | 440 | 0.2384 |
| 0.3687 | 6.0 | 528 | 0.1904 |
| 0.3687 | 7.0 | 616 | 0.1638 |
| 0.3687 | 8.0 | 704 | 0.1485 |
| 0.3687 | 9.0 | 792 | 0.1405 |
| 0.3687 | 10.0 | 880 | 0.1277 |
| 0.3687 | 11.0 | 968 | 0.1232 |
| 0.0629 | 12.0 | 1056 | 0.1291 |
| 0.0629 | 13.0 | 1144 | 0.1159 |
| 0.0629 | 14.0 | 1232 | 0.1123 |
| 0.0629 | 15.0 | 1320 | 0.1160 |
| 0.0629 | 16.0 | 1408 | 0.1159 |
| 0.0629 | 17.0 | 1496 | 0.1195 |
| 0.0137 | 18.0 | 1584 | 0.1186 |
| 0.0137 | 19.0 | 1672 | 0.1179 |
| 0.0137 | 20.0 | 1760 | 0.1180 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
stephansf/taxi | stephansf | 2023-07-03T02:15:53Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T02:15:48Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="stephansf/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hopkins/mbart-finetuned-eng-deu-41 | hopkins | 2023-07-03T02:06:54Z | 63 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:48:43Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-41
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6499
- Bleu: 21.0780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-38 | hopkins | 2023-07-03T02:06:04Z | 65 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:52:19Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-38
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7718
- Bleu: 21.7535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
digiplay/CityEdge_StyleMix_v1.44 | digiplay | 2023-07-03T02:03:34Z | 310 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T01:27:43Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/63243/cityedgestylemix
Sample images and prompt:
1girl, solo, long hair blown by wind,close-up ,long dress, green eyes, white stocking, lace, look at viewer, luxurious, elegant, extremely detailed, majestic, blurry, blurry background, tree, branch, cherry blossoms, butterfly, flower petals blown by wind, depth of field,

8k Angel sky,best quality , masterpiece, close up, ultra detailed ,upper body


|
Soojeong/female_hanbok_1e-7_ckpt | Soojeong | 2023-07-03T02:02:54Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-02T23:52:07Z |
---
license: creativeml-openrail-m
base_model: model/chilloutmix_NiPrunedFp16Fix
instance_prompt: a photo of wearing hanbok
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Soojeong/female_hanbok_1e-7_ckpt
This is a dreambooth model derived from model/chilloutmix_NiPrunedFp16Fix. The weights were trained on a photo of wearing hanbok using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
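
A minimal loading sketch, assuming the repository stores a full `StableDiffusionPipeline` as its tags suggest; the prompt is the instance prompt from this card, and scheduler or negative-prompt choices are left to the user.

```python
# Hedged sketch for sampling from this DreamBooth checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Soojeong/female_hanbok_1e-7_ckpt", torch_dtype=torch.float16
).to("cuda")

# The instance prompt the weights were trained on, per the card.
image = pipe("a photo of wearing hanbok").images[0]
image.save("hanbok.png")
```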
|
hopkins/mbart-finetuned-eng-deu-38 | hopkins | 2023-07-03T01:51:51Z | 63 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:33:49Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-38
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6498
- Bleu: 20.8314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Huggingfly/q-FrozenLake-v1-4x4-noSlippery | Huggingfly | 2023-07-03T01:40:04Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T01:40:00Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Huggingfly/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
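
`load_from_hub` above is a helper defined in the Deep RL course notebook. A self-contained alternative is sketched below; it assumes the pickle stores a dict with at least `env_id` and `qtable`, which is the usual layout for these course artifacts.

```python
# Self-contained rollout sketch (assumptions noted above).
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Huggingfly/q-FrozenLake-v1-4x4-noSlippery",
                       filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```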
|
hopkins/mbart-finetuned-eng-kor-35 | hopkins | 2023-07-03T01:35:52Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:18:24Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-35
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9930
- Bleu: 6.7448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-34 | hopkins | 2023-07-03T01:33:22Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:15:53Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-34
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9937
- Bleu: 7.1397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ankitvyas/myBloomLoraModel | ankitvyas | 2023-07-03T01:31:54Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-03T01:19:55Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Ngadou/bert-sms-spam-dectector | Ngadou | 2023-07-03T01:29:26Z | 111 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"dataset:Ngadou/Spam_SMS",
"doi:10.57967/hf/0746",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-10T21:24:39Z | ---
license: cc-by-4.0
datasets:
- Ngadou/Spam_SMS
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
--- |
hopkins/mbart-finetuned-eng-ind-35 | hopkins | 2023-07-03T01:17:55Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:00:12Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-35
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7681
- Bleu: 21.8412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SATOU0ZHU/qteamixQ_omegaFp16 | SATOU0ZHU | 2023-07-03T01:17:22Z | 30 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-02T16:02:18Z | Diffusers version of QTea Model |
hopkins/mbart-finetuned-eng-deu-37 | hopkins | 2023-07-03T01:11:58Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:53:43Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-37
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6509
- Bleu: 20.9509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-33 | hopkins | 2023-07-03T00:53:13Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:39:30Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-33
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9920
- Bleu: 6.7823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dereklvlv/tsh-chp-150 | dereklvlv | 2023-07-03T00:50:51Z | 4 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-03T00:45:07Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
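
As an illustration only (the card does not include loading code), the configuration above corresponds roughly to the following `BitsAndBytesConfig`; the `llm_int8_*` entries are the library defaults and are omitted.

```python
# Illustrative reconstruction of the quantization config listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)
```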
### Framework versions
- PEFT 0.4.0.dev0
|
hopkins/mbart-finetuned-eng-kor-32 | hopkins | 2023-07-03T00:47:13Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:33:42Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-32
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9944
- Bleu: 7.0531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
FPHam/Rachel_Assistant_Editor_13b_GPTQ | FPHam | 2023-07-03T00:34:14Z | 6 | 5 | transformers | [
"transformers",
"llama",
"text-generation",
"LLM",
"writing",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-02T23:27:48Z | ---
tags:
- LLM
- llama
- writing
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/eNZ7P8D.png" alt="FPHam's Harper" style="width: 50%; min-width: 300px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Rachel Ko-fi (I'll give her a sip)</a></p>
</div>
<!-- header end -->
## Rachel is an assistant editor for your fiction. (v.2)
This is a retraining of [Harper Assistant Editor](https://huggingface.co/FPHam/Harper_AssistantEditor_V1_13b_GPTQ) (who has been fired - she was just too young and inexperienced to do this job properly)
Meet Rachel. She's a senior Assistant Editor at the famous FP Ham publishing company.
**IMPORTANT NOTE: Rachel uses ALPACA instruct**
If you want to get in touch with Rachel, a decent approach is to ask her to
```
### Instruction: Rewrite the following text: <the text>
### Response:
```
Text should be a short paragraph at a time, a few sentences. If you write long paragraphs, Rachel might feel the urge to shorten them.
You see, her attention span is like that of a goldfish.
You can give her specific instructions like:
```
Rewrite the following text using aristocratic British English,
also known as 'Received Pronunciation' (RP) or 'BBC English':
```
Oh, and just a heads up, if you ask her to use profanity, well, the outcome is all on you because she is very eager to comply.
Don't be surprised if Rachel jumps at the chance to correct your words, even if you thought they were perfectly fine.
She's just that eager to make her mark, you know?
## Examples
Here are some examples. The format was just as above: an instruction to rewrite text, followed by the input text itself.
The input text was (of course):
```
It is a truth universally acknowledged, that a single man in possession
of a good fortune must be in want of a wife.
However little known the feelings or views of such a man may be on his
first entering a neighbourhood, this truth is so well fixed in the minds
of the surrounding families, that he is considered as the rightful property
of some one or other of their daughters.
```
Rewrite the following text in a modern, easy-to-read language
```
Everyone knows that if a guy has money and isn't married,
people will think he needs to find a wife.
So it's assumed that whoever owns all that
cash should hook up with someone from the neighborhood.
```
Rewrite the following text the way Zapp Brannigan from Futurama talks
```
Y'know, it's a fact, sonny, that any fella with a bit o' money to spare
has gotta find himself a missus.
They all know the score - ain't no secret that when a new face rolls into town,
there's gonna be some lucky lady claimin' him for her own.
Ain't nothin' like it in their heads, mind you, but they don't need no fancy
talkers to figure out who gets dibs on 'em.
```
Rewrite the following text reversing the genders
```
It is a truth universally acknowledged, that a single woman in
possession of a good fortune must be in want of a husband.
However little known the feelings or views of such a woman may
be on her first entering a neighbourhood, this truth is so well fixed
in the minds of the surrounding families, that she is considered as
the rightful property of some one or other of their sons.
```
Rewrite the following text in Czech
```
Je pravda, kterou uznávají všichni, že jednotlivec vlastnící nějaké bohatství musí
potřebovat ženu.
Ačkoli pocity nebo názory takového muže mohou být málo známé okolním rodinám,
když se poprvé setká s blízkým okolím, je tato pravda tak pevně zakořeněná
ve mysli sousedních rodin, že je považován za majetek právem jedné ze jejich dcer.
```
Rewrite the following text using profanity, obscenity, and vulgar language
```
(okay, try it for yourself if you must)
```
## Technicalities
Based on Meta LLaMA
Trained using ALPACA Instruct:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: Rewrite the following text: <the text>
### Response:
```
You can have a conversation with Rachel about your text of course.
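
For reference, the Alpaca-style prompt can be assembled as in the sketch below. It only builds the string; loading the GPTQ weights depends on your local setup (for example AutoGPTQ or text-generation-webui) and is not covered by this card.

```python
# Builds the Alpaca-style prompt Rachel expects (string assembly only).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction: Rewrite the following text: {text}\n\n"
    "### Response:"
)

def build_prompt(text: str) -> str:
    return ALPACA_TEMPLATE.format(text=text.strip())

print(build_prompt("It is a truth universally acknowledged, that a single man ..."))
```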
Params.
I am only happy if you experiment with the parameters. The parameters I used for testing:
```
Temperature: 0.7
top_p: 0.9
top_k: 20
repetition penalty: 1.15
``` |
hopkins/mbart-finetuned-eng-ind-32 | hopkins | 2023-07-03T00:33:11Z | 121 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:19:33Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-32
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7674
- Bleu: 21.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
FPHam/Harper_AssistantEditor_V1_13b_GPTQ | FPHam | 2023-07-03T00:33:01Z | 11 | 7 | transformers | [
"transformers",
"llama",
"text-generation",
"LLama",
"writing",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-27T18:00:59Z | ---
language:
- en
tags:
- LLama
- writing
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/Kd0Vpem.png" alt="FPHam's Harper" style="width: 30%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Harper Ko-fi (I'll give her a sip)</a></p>
</div>
<!-- header end -->
## Harper is an assistant editor for your fiction. (v.1)
Meet Harper. She's a young Assistant Editor at the famous FP Ham publishing company.
This is test version 1
Note: Harper was fired and replaced with Rachel https://huggingface.co/FPHam/Rachel_Assistant_Editor_13b_GPTQ
Harper v.1 uses Vicuna format:
```
User: Rewrite the following text: <the text>
Assistant:
```
Text should be a short paragraph at a time, a few sentences. If you write long paragraphs, Harper might feel the urge to shorten them.
You see, her attention span is like that of the new generation or a goldfish. |
hopkins/mbart-finetuned-eng-ind-31 | hopkins | 2023-07-03T00:31:21Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:17:51Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-31
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-31
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7681
- Bleu: 21.8896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
samzoozi/q-FrozenLake-v1-4x4-noSlippery | samzoozi | 2023-07-03T00:24:06Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T00:24:03Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="samzoozi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hopkins/mbart-finetuned-eng-deu-32 | hopkins | 2023-07-03T00:19:06Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:05:09Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-32
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6494
- Bleu: 21.1716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-29 | hopkins | 2023-07-03T00:10:34Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-02T23:56:59Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-29
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9934
- Bleu: 7.0740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-28 | hopkins | 2023-07-03T00:04:40Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-02T23:50:40Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-28
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9908
- Bleu: 6.8812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
digiplay/Noosphere_v3 | digiplay | 2023-07-03T00:00:25Z | 285 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-02T20:01:58Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info: highly recommended 👍👍👍👍👍 useful text-to-image model 😉👍
https://civitai.com/models/36538?modelVersionId=107675
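
A minimal diffusers sketch for this checkpoint, assuming the repository stores a full `StableDiffusionPipeline` as its tags suggest; the scheduler swap, prompt, and negative prompt are illustrative choices, not recommendations from the author.

```python
# Hedged usage sketch; prompt and settings are placeholders.
import torch
from diffusers import EulerAncestralDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/Noosphere_v3", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "1girl, solo, long hair, cherry blossoms, depth of field",
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("noosphere_sample.png")
```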
Sample image I made:



Original Author's DEMO images:


|
hopkins/mbart-finetuned-eng-kor-26 | hopkins | 2023-07-03T00:00:17Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-02T23:44:37Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-26
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9925
- Bleu: 6.9558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-26 | hopkins | 2023-07-02T23:44:09Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-02T23:30:41Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-26
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7671
- Bleu: 22.0145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-27 | hopkins | 2023-07-02T23:34:12Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-02T23:20:13Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-27
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6502
- Bleu: 21.0549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AhmedTaha012/gptneo-TxtToJson-v0.1.10 | AhmedTaha012 | 2023-07-02T23:31:21Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-02T22:54:01Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gptneo-TxtToJson-v0.1.10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gptneo-TxtToJson-v0.1.10
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2115
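
The card does not document the expected input format for this text-to-JSON model, so the sketch below uses a placeholder prompt; treat it as a starting point only.

```python
# Hedged generation sketch; the prompt format is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="AhmedTaha012/gptneo-TxtToJson-v0.1.10")
out = generator(
    "Order: two large pizzas delivered to 5th Avenue tomorrow at noon.",
    max_new_tokens=128,
    do_sample=False,
)
print(out[0]["generated_text"])
```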
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 414 | 0.8261 |
| 1.2433 | 2.0 | 828 | 0.4342 |
| 0.5005 | 3.0 | 1242 | 0.2863 |
| 0.2502 | 4.0 | 1656 | 0.2305 |
| 0.1616 | 5.0 | 2070 | 0.2115 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
hopkins/mbart-finetuned-eng-kor-24 | hopkins | 2023-07-02T23:25:30Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-02T23:07:23Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-24
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9896
- Bleu: 7.0455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-25 | hopkins | 2023-07-02T23:21:35Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-02T23:03:50Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-25
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9949
- Bleu: 6.9771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LucasDash/dash-sd-v15 | LucasDash | 2023-07-02T23:17:58Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T22:12:10Z | ---
license: creativeml-openrail-m
---
# Dash SD 1.5 Experimental Models

|
squeeze-ai-lab/sq-vicuna-13b-w4-s45 | squeeze-ai-lab | 2023-07-02T23:17:14Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-06-26T19:19:33Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
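
As a toy illustration of the dense-and-sparse idea (not the SqueezeLLM implementation, which uses sensitivity-aware non-uniform codebooks), a weight matrix can be split into a small set of full-precision outliers and a quantized dense remainder:

```python
# Toy dense-and-sparse decomposition: keep a few large-magnitude outliers in
# full precision and quantize the rest to a low-bit uniform grid.
import numpy as np

def dense_and_sparse(W, outlier_frac=0.0045, bits=4):
    thresh = np.quantile(np.abs(W), 1.0 - outlier_frac)
    sparse = np.where(np.abs(W) >= thresh, W, 0.0)  # full-precision outliers
    dense = W - sparse                               # remaining dense part
    scale = np.abs(dense).max() / (2 ** (bits - 1) - 1)
    dense_hat = np.round(dense / scale) * scale      # uniform low-bit quantization
    return dense_hat + sparse

W = np.random.randn(256, 256)
W[np.random.rand(*W.shape) < 0.001] *= 25            # inject a few outliers
print("mean abs error:", np.abs(W - dense_and_sparse(W)).mean())
```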
## Model description
4-bit quantized Vicuna-13B-v1.1 model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [Vicuna-13B-v1.1](https://huggingface.co/lmsys/vicuna-13b-delta-v1.1) (by [LMSYS](https://lmsys.org/))
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.45%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-vicuna-13b-w3-s0 | squeeze-ai-lab | 2023-07-02T23:16:13Z | 0 | 2 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-06-15T02:13:11Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit quantized Vicuna-13B-v1.1 model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [Vicuna-13B-v1.1](https://huggingface.co/lmsys/vicuna-13b-delta-v1.1) (by [LMSYS](https://lmsys.org/))
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-vicuna-7b-w4-s45 | squeeze-ai-lab | 2023-07-02T23:15:54Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-06-26T19:19:08Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized Vicuna-7B-v1.1 model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [Vicuna-7B-v1.1](https://huggingface.co/lmsys/vicuna-7b-delta-v1.1) (by [LMSYS](https://lmsys.org/))
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.45%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-vicuna-7b-w3-s45 | squeeze-ai-lab | 2023-07-02T23:15:24Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-06-26T19:19:22Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit quantized Vicuna-7B-v1.1 model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [Vicuna-7B-v1.1](https://huggingface.co/lmsys/vicuna-7b-delta-v1.1) (by [LMSYS](https://lmsys.org/))
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.45%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-vicuna-7b-w4-s0 | squeeze-ai-lab | 2023-07-02T23:13:27Z | 0 | 1 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-06-15T02:13:28Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized Vicuna-7B-v1.1 model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [Vicuna-7B-v1.1](https://huggingface.co/lmsys/vicuna-7b-delta-v1.1) (by [LMSYS](https://lmsys.org/))
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
hopkins/mbart-finetuned-eng-ind-24 | hopkins | 2023-07-02T23:06:53Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-02T22:48:41Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-24
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7644
- Bleu: 22.0678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|