Column summary:
- modelId: string, 5 to 139 characters
- author: string, 2 to 42 characters
- last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-13 18:27:38
- downloads: int64, 0 to 223M
- likes: int64, 0 to 11.7k
- library_name: string, 518 distinct values
- tags: list, 1 to 4.05k entries
- pipeline_tag: string, 55 distinct values
- createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-13 18:27:10
- card: string, 11 to 1.01M characters

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
HusseinHE/icbinh | HusseinHE | 2023-09-21T20:12:39Z | 16 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-21T20:07:02Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ICBINH Dreambooth model trained by HusseinHE with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
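You can also try it locally. A minimal inference sketch (assuming the `diffusers` library and a CUDA GPU; the prompt, and the assumption that "icbinh" is the trained instance token, are illustrative only):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-trained weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained("HusseinHE/icbinh", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "icbinh" is assumed to be the trained concept token; adjust the prompt to taste.
image = pipe("a portrait photo of icbinh, studio lighting").images[0]
image.save("icbinh_sample.png")
```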
Sample pictures of this concept:
|
fsuarez/autotrain-image-classification-86974143294 | fsuarez | 2023-09-21T19:47:43Z | 194 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:fsuarez/autotrain-data-image-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-04T14:31:09Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- fsuarez/autotrain-data-image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 1.6524320573590656
---
## 📒 image-classification-model
This model was trained on the "image-classification" dataset for multi-class classification of website segments. Each segment is assigned to one of six classes covering a broad spectrum of web elements:
- **Button**: Identifying interactive buttons that users can click or tap on for various website functions.
- **Textfield**: Recognizing text input fields where users can type or enter information.
- **Checkbox**: Detecting checkboxes that users can select or deselect to make choices or indicate preferences.
- **Radiobutton**: Identifying radio buttons that allow users to choose a single option from a list.
- **Tables**: Recognizing tabular data structures that organize information in rows and columns.
- **AppBar**: Detecting app bars or navigation bars typically found at the top of web pages, often containing menus, search bars, or branding elements.
This extensive training equips the model with the ability to accurately classify these web elements.
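A minimal inference sketch (assuming the `transformers` pipeline API; `button.png` is a placeholder for a cropped screenshot of a web element):
```python
from transformers import pipeline

# Load the fine-tuned Swin classifier from the Hub.
classifier = pipeline(
    "image-classification",
    model="fsuarez/autotrain-image-classification-86974143294",
)

# "button.png" stands in for any cropped screenshot of a web element.
for prediction in classifier("button.png"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```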
# 🧪 Dataset Content
The dataset is structured to facilitate the analysis of website components. It includes various types of objects commonly found on websites, such as buttons, text fields, checkboxes, radio buttons, tables, and app bars. Each object type is organized into its respective category within the dataset, allowing for precise classification.
| Web Element Category | Quantity of images |
|----------------------|--------------------|
| Button | 2934 |
| Textfield | 100 |
| Checkbox | 422 |
| Radiobutton | 466 |
| Tables | 100 |
| AppBar | 100 |
# 🤗 Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 86974143294
- CO2 Emissions (in grams): 1.6524
## 📐 Validation Metrics
- Loss: 0.079
- Accuracy: 0.983
- Macro F1: 0.967
- Micro F1: 0.983
- Weighted F1: 0.983
- Macro Precision: 0.971
- Micro Precision: 0.983
- Weighted Precision: 0.983
- Macro Recall: 0.964
- Micro Recall: 0.983
- Weighted Recall: 0.983 |
Bogdan63/distilbert-imdb | Bogdan63 | 2023-09-21T19:45:05Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-10T13:46:45Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
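A minimal inference sketch (assuming standard `transformers` sequence-classification loading; the review text is a placeholder and label names come from the exported config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Bogdan63/distilbert-imdb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder review text.
inputs = tokenizer("A surprisingly touching film with great performances.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for label_id, p in enumerate(probs.tolist()):
    print(model.config.id2label[label_id], round(p, 3))
```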
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|
| No log | 1.0 | 313 | 0.2279 | 0.9112 | 0.9112 | 0.9112 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Logeswaransr/T5_MineAI_Prototype | Logeswaransr | 2023-09-21T19:43:09Z | 103 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-21T19:37:52Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: results_T5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_T5
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8755
- Rouge1: 0.2921
- Rouge2: 0.1519
- Rougel: 0.2857
- Rougelsum: 0.2866
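A minimal inference sketch (assuming the `transformers` text2text-generation pipeline; the example question is a placeholder, since the training data is not documented here):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Logeswaransr/T5_MineAI_Prototype")

# Placeholder input; swap in a question relevant to the MineAI use case.
result = generator("What safety gear is required before entering the mine?", max_new_tokens=64)
print(result[0]["generated_text"])
```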
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 109 | 0.4682 | 0.2349 | 0.0878 | 0.2327 | 0.2344 |
| No log | 2.0 | 218 | 0.4153 | 0.2519 | 0.0965 | 0.2481 | 0.2503 |
| No log | 3.0 | 327 | 0.4102 | 0.3011 | 0.1465 | 0.2979 | 0.2990 |
| No log | 4.0 | 436 | 0.4386 | 0.2555 | 0.1138 | 0.2496 | 0.2496 |
| 0.8199 | 5.0 | 545 | 0.4784 | 0.2725 | 0.1188 | 0.2675 | 0.2665 |
| 0.8199 | 6.0 | 654 | 0.5088 | 0.2524 | 0.1066 | 0.2497 | 0.2501 |
| 0.8199 | 7.0 | 763 | 0.5680 | 0.2542 | 0.1093 | 0.2497 | 0.2496 |
| 0.8199 | 8.0 | 872 | 0.5982 | 0.2740 | 0.1375 | 0.2694 | 0.2698 |
| 0.8199 | 9.0 | 981 | 0.6575 | 0.2730 | 0.1368 | 0.2723 | 0.2714 |
| 0.0653 | 10.0 | 1090 | 0.6753 | 0.2822 | 0.1519 | 0.2798 | 0.2781 |
| 0.0653 | 11.0 | 1199 | 0.6923 | 0.2795 | 0.1486 | 0.2780 | 0.2774 |
| 0.0653 | 12.0 | 1308 | 0.7350 | 0.2471 | 0.1209 | 0.2458 | 0.2457 |
| 0.0653 | 13.0 | 1417 | 0.7698 | 0.2762 | 0.1463 | 0.2720 | 0.2733 |
| 0.0225 | 14.0 | 1526 | 0.7867 | 0.2771 | 0.1372 | 0.2763 | 0.2755 |
| 0.0225 | 15.0 | 1635 | 0.8166 | 0.3166 | 0.1689 | 0.3132 | 0.3133 |
| 0.0225 | 16.0 | 1744 | 0.8085 | 0.3027 | 0.1572 | 0.2998 | 0.3009 |
| 0.0225 | 17.0 | 1853 | 0.8162 | 0.3090 | 0.1734 | 0.3025 | 0.3038 |
| 0.0225 | 18.0 | 1962 | 0.8484 | 0.2965 | 0.1627 | 0.2917 | 0.2909 |
| 0.0105 | 19.0 | 2071 | 0.8610 | 0.2881 | 0.1487 | 0.2813 | 0.2819 |
| 0.0105 | 20.0 | 2180 | 0.8688 | 0.2811 | 0.1494 | 0.2755 | 0.2770 |
| 0.0105 | 21.0 | 2289 | 0.8733 | 0.2777 | 0.1453 | 0.2708 | 0.2724 |
| 0.0105 | 22.0 | 2398 | 0.8776 | 0.2771 | 0.1475 | 0.2709 | 0.2711 |
| 0.0061 | 23.0 | 2507 | 0.8717 | 0.2829 | 0.1467 | 0.2749 | 0.2749 |
| 0.0061 | 24.0 | 2616 | 0.8729 | 0.2878 | 0.1467 | 0.2803 | 0.2806 |
| 0.0061 | 25.0 | 2725 | 0.8755 | 0.2921 | 0.1519 | 0.2857 | 0.2866 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jgeselowitz/poem_labeler | jgeselowitz | 2023-09-21T19:43:00Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-21T19:24:46Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: poem_labeler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem_labeler
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1318
- Accuracy: 0.7315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
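The settings above map roughly onto `TrainingArguments` as in the sketch below (a reconstruction, not the original training script; `output_dir` and any argument not listed above are assumptions):
```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; unlisted arguments keep their defaults.
training_args = TrainingArguments(
    output_dir="poem_labeler",      # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```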
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0602 | 1.0 | 2382 | 0.9264 | 0.6935 |
| 0.5889 | 2.0 | 4764 | 0.9186 | 0.723 |
| 0.2638 | 3.0 | 7146 | 1.1318 | 0.7315 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
glaiveai/glaive-coder-7b | glaiveai | 2023-09-21T19:35:50Z | 1,565 | 54 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"dataset:glaiveai/glaive-code-assistant",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-17T14:49:44Z | ---
license: llama2
datasets:
- glaiveai/glaive-code-assistant
language:
- en
tags:
- code
---
# Glaive-coder-7b
Glaive-coder-7b is a 7B parameter code model trained on a dataset of ~140k programming related problems and solutions generated from Glaive’s synthetic data generation platform.
The model is fine-tuned on the CodeLlama-7b model.
## Usage:
The model is trained to act as a code assistant, and can do both single instruction following and multi-turn conversations.
It follows the same prompt format as CodeLlama-7b-Instruct-
```
<s>[INST]
<<SYS>>
{{ system_prompt }}
<</SYS>>
{{ user_msg }} [/INST] {{ model_answer }} </s>
<s>[INST] {{ user_msg }} [/INST]
```
You can run the model in the following way-
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("glaiveai/glaive-coder-7b")
model = AutoModelForCausalLM.from_pretrained("glaiveai/glaive-coder-7b").half().cuda()

def fmt_prompt(prompt):
    return f"<s> [INST] {prompt} [/INST]"

# Example instruction; replace with your own prompt.
prompt = "Write a Python function that checks whether a string is a palindrome."

inputs = tokenizer(fmt_prompt(prompt), return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=True, temperature=0.1, top_p=0.95, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
```
## Benchmarks:
The model achieves a 63.1% pass@1 on HumanEval and a 45.2% pass@1 on MBPP. However, these benchmarks are not fully representative of real-world usage of code models, so we are launching the [Code Models Arena](https://arena.glaive.ai/) to let users vote on model outputs, giving us a better understanding of user preference on code models and a basis for new and better benchmarks. We plan to release the Arena results as soon as we have a sufficient amount of data.
Join the Glaive [discord](https://discord.gg/fjQ4uf3yWD) for improvement suggestions, bug-reports and collaborating on more open-source projects. |
mychen76/donut-receipt_v3 | mychen76 | 2023-09-21T19:29:49Z | 66 | 0 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-09-20T17:01:53Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-receipt_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-receipt_v3
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unknown dataset.
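A minimal inference sketch (assuming the standard Donut pattern from `transformers`; `receipt.png` is a placeholder path and `<s_receipt>` is an assumed task-start token, since the fine-tuning prompt is not documented here):
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("mychen76/donut-receipt_v3")
model = VisionEncoderDecoderModel.from_pretrained("mychen76/donut-receipt_v3")

image = Image.open("receipt.png").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values

# "<s_receipt>" is an assumed task prompt; use whatever token the fine-tune expects.
decoder_input_ids = processor.tokenizer(
    "<s_receipt>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.token2json(processor.batch_decode(outputs)[0]))
```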
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Tanor/SRGPTSENTPOS6 | Tanor | 2023-09-21T19:28:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:Tanor/SRGPTSENTPOS6",
"base_model:finetune:Tanor/SRGPTSENTPOS6",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T17:55:14Z | ---
base_model: Tanor/SRGPTSENTPOS6
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: SRGPTSENTPOS6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SRGPTSENTPOS6
This model is a fine-tuned version of [Tanor/SRGPTSENTPOS6](https://huggingface.co/Tanor/SRGPTSENTPOS6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3376
- F1: 0.1538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0145 | 1.0 | 2706 | 0.4809 | 0.2526 |
| 0.0052 | 2.0 | 5412 | 0.3511 | 0.1562 |
| 0.0136 | 3.0 | 8118 | 0.3620 | 0.2222 |
| 0.0028 | 4.0 | 10824 | 0.3376 | 0.1538 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.1.0.dev20230801
- Datasets 2.14.2
- Tokenizers 0.13.3
|
noelsinghsr/sagemaker-distilbert-emotion | noelsinghsr | 2023-09-21T19:26:05Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-21T19:24:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: test
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9125
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2634
- Accuracy: 0.9125
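A minimal inference sketch (assuming the `transformers` text-classification pipeline; the example sentence is a placeholder and label names come from the exported config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="noelsinghsr/sagemaker-distilbert-emotion")

# Placeholder input; top_k=None returns scores for every emotion label.
for prediction in classifier("I can't wait to see you this weekend!", top_k=None):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```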
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9347 | 1.0 | 500 | 0.2634 | 0.9125 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NousResearch/Llama-2-70b-chat-hf | NousResearch | 2023-09-21T19:05:17Z | 625 | 19 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-07-19T04:36:22Z | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
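A minimal generation sketch (assuming `transformers` plus `accelerate` and enough GPU memory to shard the 70B weights; the prompt follows the Llama-2 chat convention and is only an example):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Llama-2-70b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads the 70B weights across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "[INST] Write a short poem about the ocean. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```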
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)| |
mirfan899/uner-bert-ner | mirfan899 | 2023-09-21T18:53:09Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-21T18:52:35Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner-bert-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner-bert-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1354
- Precision: 0.8267
- Recall: 0.8707
- F1: 0.8481
- Accuracy: 0.9640
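A minimal inference sketch (assuming the `transformers` token-classification pipeline; the sample sentence is an arbitrary placeholder, since the training data is not documented here):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mirfan899/uner-bert-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

# Placeholder sentence; the multilingual-BERT base accepts many languages.
for entity in ner("My name is Sarah and I live in Lahore."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```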
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 144 | 0.1496 | 0.7687 | 0.7971 | 0.7826 | 0.9533 |
| No log | 2.0 | 288 | 0.1429 | 0.7719 | 0.8584 | 0.8129 | 0.9573 |
| No log | 3.0 | 432 | 0.1267 | 0.8014 | 0.8682 | 0.8335 | 0.9629 |
| 0.1628 | 4.0 | 576 | 0.1316 | 0.8206 | 0.8723 | 0.8457 | 0.9644 |
| 0.1628 | 5.0 | 720 | 0.1354 | 0.8267 | 0.8707 | 0.8481 | 0.9640 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
mirfan899/uner-roberta-ner | mirfan899 | 2023-09-21T18:42:07Z | 103 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-21T18:40:21Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner-roberta-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner-roberta-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0930
- Precision: 0.8622
- Recall: 0.9010
- F1: 0.8812
- Accuracy: 0.9728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 144 | 0.1285 | 0.8005 | 0.8241 | 0.8121 | 0.9589 |
| No log | 2.0 | 288 | 0.1142 | 0.8142 | 0.8748 | 0.8434 | 0.9655 |
| No log | 3.0 | 432 | 0.0962 | 0.8485 | 0.8985 | 0.8728 | 0.9702 |
| 0.1923 | 4.0 | 576 | 0.0916 | 0.8543 | 0.9018 | 0.8774 | 0.9719 |
| 0.1923 | 5.0 | 720 | 0.0930 | 0.8622 | 0.9010 | 0.8812 | 0.9728 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Tostino/Inkbot-13b-4k | Tostino | 2023-09-21T18:36:48Z | 9 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-21T05:56:48Z | # Model Card for Inkbot
## Model Details
Inkbot is a conversational AI model designed to interpret and respond to structured prompts with or without contextual information. Built on the latest advancements in natural language processing (NLP) and understanding (NLU), Inkbot provides users with accurate and meaningful interactions, addressing a wide range of queries and topics. Its unique feature lies in the structured prompt system, allowing users to engage in dynamic dialogues that can evolve based on context, metadata, and user input.
### Performance
- The model excels in RAG type queries, answering from context, and overriding memory when available.
- It can handle very large contexts, but may sometimes enter a repeating text loop, especially during complex tasks.
- The model is intended to be more functional and less chatty, avoiding the waste of tokens on superfluous language.
## How to Use
Inkbot uses a structured prompt template system.
### Prompt Template Structure
#### With Context:
```
<#meta#>
- Date: [DATE]
- Task: [TASK TYPE]
<#system#>
[SYSTEM PROMPT/DESCRIPTION OF THE INTERACTION]
<#chat#>
<#user#>
[USER QUESTION]
<#bot#>
[BOT RESPONSE]
<#user#>
[USER FOLLOW-UP QUESTION]
<#user_context#>
[ADDITIONAL CONTEXT FOR THE BOT]
<#bot#>
[BOT RESPONSE BASED ON CONTEXT]
```
#### Without Context:
```
<#meta#>
- Date: [DATE]
- Task: [TASK TYPE]
<#system#>
[SYSTEM PROMPT/DESCRIPTION OF THE INTERACTION]
<#chat#>
<#user#>
[USER QUESTION]
<#bot#>
[BOT RESPONSE]
<#user#>
[USER FOLLOW-UP QUESTION]
<#bot#>
[BOT RESPONSE]
```
### Examples
#### With Context:
```
<#meta#>
- Date: 2023-07-20
- Task: general
<#system#>
A dialogue focused on determining dates based on context clues.
<#chat#>
<#user#>
What is today's date?
<#bot#>
The date is: 2023-07-20
<#user#>
When did the event in the provided context take place?
<#user_context#>
It's the day after new years eve, and I'm starting 2018 with a headache.
<#bot#>
The event in the context took place on: 2018-01-01
```
#### Without Context:
```
<#meta#>
- Date: 2023-07-20
- Task: general
<#system#>
A simple dialogue about daily queries.
<#chat#>
<#user#>
Will it rain today?
<#bot#>
I am not sure, without knowing your location I am unable to determine the weather.
<#user#>
Can you compose a haiku for me?
<#bot#>
Whispers of the breeze,
Nature's melodies unfold,
Dawn dispels the night.
```
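A minimal generation sketch (assuming the `transformers` library plus `accelerate`; the prompt simply reproduces the "without context" template above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tostino/Inkbot-13b-4k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assemble a prompt following the structured template shown above.
prompt = (
    "<#meta#>\n"
    "- Date: 2023-07-20\n"
    "- Task: general\n"
    "<#system#>\n"
    "A simple dialogue about daily queries.\n"
    "<#chat#>\n"
    "<#user#>\n"
    "Can you compose a haiku for me?\n"
    "<#bot#>\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```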
## Task Options
Inkbot has been trained for a variety of tasks. Below are some of the key task options you can utilize, but feel free to try others to test the effect:
1. **general**: This is the default task and is designed for a broad range of general questions and interactions.
- Usage: Suitable for most day-to-day interactions and queries.
2. **knowledge_graph**: This task involves extracting, understanding, and representing information in a structured way.
- Usage: When you want to extract relationships between entities or desire structured representations of data.
3. **question_answer**: Explicitly trained for answering questions in a straightforward manner.
- Usage: Best used when you have direct questions and expect concise answers.
4. **reasoning**: Allows Inkbot to showcase its logical and deductive reasoning capabilities.
- Usage: Ideal for puzzles, riddles, or scenarios where logical analysis is required.
5. **translation**: Use this for language translation tasks.
- Usage: Provide a sentence or paragraph in one language, and specify the desired target language for translation.
6. **summarization**: Trained for condensing large texts into shorter, coherent summaries.
- Usage: When you have a lengthy text or article that you want to be summarized to its key points.
7. **creative_writing**: Engage Inkbot in composing stories, poetry, and other creative content.
- Usage: For tasks that require imaginative and original content generation.
## Limitations
- Adhere to the prompt structure for best results.
- When providing contextual details, clarity is essential for Inkbot to derive accurate and meaningful responses.
- The memory override supplied via the `user_context` property generally only holds for the next prompt or two, after which the model reverts to its original behavior.
- On complex tasks, like creating a coherent story based on a set of facts from context, there's a potential for a repeating text loop as context fills.
- Sometimes the model doesn't know when to end a knowledge graph, which can result in adding nodes and edges until it runs out of context.
## Additional Notes
- The 'date', 'task', and 'system' are crucial metadata components that need to be provided outside the core dialogue.
- Use the 'user_context' when you want to offer supplementary context that guides Inkbot's response. You can interleave it in the chat log as necessary.
- The specific tag format, such as `<#word#>`, is used because many APIs filter `<|word|>`-style tags, and this format makes interactions easier.
---
license: llama2
---
|
dancingninjas/sentiment-model | dancingninjas | 2023-09-21T18:24:24Z | 68 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"text classification",
"transformer",
"sentiment analysis",
"TensorFlow",
"en",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-21T10:54:56Z | ---
license: apache-2.0
datasets:
- imdb
language:
- en
pipeline_tag: text-classification
tags:
- text classification
- transformer
- sentiment analysis
- distilbert
- TensorFlow
--- |
sudhanvasp/Sentiment-Analysis | sudhanvasp | 2023-09-21T18:19:27Z | 164 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-20T13:47:15Z | # SentimentAnalysis
Sentiment analysis with NLTK (Folder 79)
Sentiment analysis with Roberta (Folder 159)
Sentiment analysis with Roberta+Awk (Folder 209)
Sentiment analysis with Roberta+Gradio (Folder 219)
<!-- MARKER: Start of README -->
# Stock Sentiment Analysis of Tweets using RoBERTa
## Table of Contents
- [Project Description](#project-description)
- [Objective](#objective)
- [Hypotheses](#hypotheses)
- [Data Collection](#data-collection)
- [Sentiment Analysis](#sentiment-analysis)
- [Machine Learning Model](#machine-learning-model)
- [Running the Model](#running-the-model)
- [Huggingface](https://huggingface.co/sudhanvasp/Sentiment-Analysis)
- [Results and Insights](#results-and-insights)
- [License](#license)
---
<!-- MARKER: Project Description -->
## Project Description
Welcome to the Stock Sentiment Analysis project! This repository houses the code and resources for analyzing Twitter data to predict stock price movements based on sentiment analysis, leveraging the powerful RoBERTa model. Gain valuable insights into market sentiment and enhance your trading strategies.
<!-- MARKER: Objective -->
## Objective
The primary aim of this project is to explore the intricate relationship between sentiment expressed in tweets and short-term stock price movements.
<!-- MARKER: Hypotheses -->
## Hypotheses
- *Hypothesis 1:* Tweets with a positive sentiment will exhibit a positive correlation with stock price increases.
- *Hypothesis 2:* Tweets with a negative sentiment will display a negative correlation with stock price decreases.
- *Hypothesis 3:* Tweets with a neutral sentiment will display a neutral correlation with stock price.
<!-- MARKER: Data Collection -->
## Data Collection
- We meticulously gathered Twitter data from financial news and analyst accounts.
- Data preprocessing was performed, encompassing deduplication, tokenization, and sentiment label encoding (positive, negative, neutral).
<!-- MARKER: Sentiment Analysis -->
## Sentiment Analysis
- Harnessing RoBERTa, a state-of-the-art transformer-based model, we assigned sentiment scores.
- Challenges such as domain-specific sentiment expressions and model fine-tuning were addressed.
<!-- MARKER: Machine Learning Model -->
## Machine Learning Model
- Our model is a robust ensemble of RoBERTa.
- Features encompass RoBERTa-generated F1 scores, tweet volume, and historical stock price data.
- This amalgamation empowers us to capture both sequential dependencies and non-linear relationships effectively.
<!-- MARKER: Running the Model -->
## Running the Model
## Hosting with Gradio
1. *Install Gradio:*
```bash
pip install gradio
```
2. Run the Gradio app code provided in Folder 219 (it starts with `import gradio as gr`); a sketch of such an app follows below.
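A minimal sketch of the Gradio app (assuming the Hub repository loads as a standard text-classification model; the actual code in Folder 219 may differ):
```python
import gradio as gr
from transformers import pipeline

# The fine-tuned RoBERTa sentiment model from this repository.
sentiment = pipeline("text-classification", model="sudhanvasp/Sentiment-Analysis")

def classify(tweet: str) -> str:
    result = sentiment(tweet)[0]
    return f"{result['label']} ({result['score']:.2f})"

gr.Interface(fn=classify, inputs="text", outputs="text").launch()
```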
## Hosting with FLASK
1. *Install Flask and run the app:*
```bash
pip install flask
cd 209
cd twitterka
python app.py
```
2. Open the given IP address in your browser.
<!-- MARKER: Project Description -->
## Huggingface Page
- Execution of the model can be done directly on Huggingface as well
- [Huggingface](https://huggingface.co/sudhanvasp/Sentiment-Analysis)
<!-- MARKER: Results and Insights-->
## Results and Insights
- Our ensemble model boasts an impressive 96% accuracy in sentiment analysis.
- Notably, positive sentiment tweets correlate positively with stock price increases, while negative sentiment tweets correlate negatively with decreases. Neutral sentiment, while present, exhibits a weaker influence on stock price movements.
<!-- MARKER: License-->
## License
- This was created by the team "The Lost Pendrive" (Sudhanva SP, Deepa Umesh, Chinmayi Rajaram)
|
cloudwalkerw/wavlm-base_3 | cloudwalkerw | 2023-09-21T17:48:37Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"wavlm",
"audio-classification",
"generated_from_trainer",
"base_model:microsoft/wavlm-base",
"base_model:finetune:microsoft/wavlm-base",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-09-18T14:31:32Z | ---
base_model: microsoft/wavlm-base
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wavlm-base_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-base_3
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6534
- Accuracy: 0.8974
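A minimal inference sketch (assuming the `transformers` audio-classification pipeline with ffmpeg available; `sample.wav` is a placeholder path):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="cloudwalkerw/wavlm-base_3")

# "sample.wav" is a placeholder; the pipeline decodes and resamples the audio for the model.
for prediction in classifier("sample.wav"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```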
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 2
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2236 | 1.24 | 100 | 12.8495 | 0.4467 |
| 0.0514 | 2.48 | 200 | 16.3078 | 0.2677 |
| 0.0 | 3.72 | 300 | 17.5651 | 0.2597 |
| 0.3252 | 4.95 | 400 | 15.0382 | 0.1912 |
| 1.0577 | 6.19 | 500 | 0.6534 | 0.8974 |
| 0.6973 | 7.43 | 600 | 0.7352 | 0.1026 |
| 0.6939 | 8.67 | 700 | 0.6210 | 0.8974 |
| 0.6944 | 9.91 | 800 | 0.7129 | 0.1026 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.0.post302
- Datasets 2.14.5
- Tokenizers 0.13.3
|
darxkies/bart-large-cnn-samsum-ChatGPT_v3 | darxkies | 2023-09-21T17:47:31Z | 10 | 1 | transformers | [
"transformers",
"rust",
"bart",
"text2text-generation",
"summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-09-21T15:44:26Z | ---
pipeline_tag: summarization
---
Original model: [Qiliang/bart-large-cnn-samsum-ChatGPT_v3](https://huggingface.co/Qiliang/bart-large-cnn-samsum-ChatGPT_v3)
Added files for [rust-bert](https://github.com/guillaume-be/rust-bert) |
TuningAI/DETR-BASE_Marine | TuningAI | 2023-09-21T17:47:20Z | 167 | 1 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"climate",
"ar",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-09-21T08:29:14Z | ---
license: apache-2.0
language:
- ar
- en
library_name: transformers
pipeline_tag: object-detection
tags:
- climate
---
# DETR-BASE_Marine
## Overview
+ Model Name: DETR-BASE_Marine
+ Model Architecture: DETR (End-to-End Object Detection) with ResNet-50 backbone.
+ Model Type: Object Detection
+ Framework: PyTorch
+ Dataset: Aerial Maritime Image Dataset
+ License: MIT License (for the dataset)
## Model Description
The DETR-BASE_Marine Aerial Maritime Detector is a deep learning model based on the DETR architecture with a ResNet-50 backbone. It has been fine-tuned on the "Aerial Maritime Image Dataset," which comprises 74 aerial photographs captured via a Mavic Air 2 drone. The model is designed for object detection tasks in maritime environments and can identify and locate various objects such as docks, boats, lifts, jetskis, and cars in aerial images.
## Key Features:
+ Multi-class object detection.
+ Object classes: Docks, Boats, Lifts, Jetskis, Cars.
+ Robust performance in aerial and maritime scenarios.
## Use Cases
+ **Boat Counting**: Count the number of boats on water bodies, such as lakes, using drone imagery.
+ **Boat Lift Detection**: Identify the presence of boat lifts on the waterfront via aerial surveillance.
+ **Car Detection**: Detect and locate cars within maritime regions using UAV drones.
+ **Habitability Assessment**: Determine the level of inhabitation around lakes and water bodies based on detected objects.
+ **Property Monitoring**: Identify if visitors or activities are present at lake houses or properties using drone surveillance.
+ **Proof of Concept**: Showcase the potential of UAV imagery for maritime projects and object detection tasks.
## Dataset
+ **Dataset Name**: Aerial Maritime Image Dataset
+ **Number of Images**: 74
+ **Number of Bounding Boxes**: 1,151
+ **Collection Method**: Captured via Mavic Air 2 drone at 400 ft altitude.
## Usage
```python
from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image

img_path = ""
image = Image.open(img_path)

processor = DetrImageProcessor.from_pretrained("TuningAI/DETR-BASE_Marine")
model = DetrForObjectDetection.from_pretrained("TuningAI/DETR-BASE_Marine")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API format
# and keep only detections with score > 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```
## License
This model is provided under the MIT License.
The Aerial Maritime Image Dataset used for fine-tuning is also under the MIT License. |
darxkies/bge-base-en-v1.5 | darxkies | 2023-09-21T17:47:07Z | 2 | 0 | transformers | [
"transformers",
"rust",
"bert",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-09-20T21:36:50Z | ---
pipeline_tag: sentence-similarity
---
Original model: [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
Added files for [rust-bert](https://github.com/guillaume-be/rust-bert) |
megagonlabs/transformers-ud-japanese-electra-base-ginza-520 | megagonlabs | 2023-09-21T17:45:45Z | 125 | 3 | transformers | [
"transformers",
"pytorch",
"electra",
"feature-extraction",
"PyTorch",
"Transformers",
"spaCy",
"ELECTRA",
"GiNZA",
"mC4",
"UD_Japanese-BCCWJ",
"GSK2014-A",
"ja",
"MIT",
"arxiv:1910.10683",
"license:mit",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2023-09-21T14:14:04Z | ---
language:
- ja
thumbnail: "https://raw.githubusercontent.com/megagonlabs/ginza/static/docs/images/GiNZA_logo_4c_s.png"
tags:
- PyTorch
- Transformers
- spaCy
- ELECTRA
- GiNZA
- mC4
- UD_Japanese-BCCWJ
- GSK2014-A
- ja
- MIT
license: "mit"
datasets:
- mC4
- UD_Japanese_BCCWJ-r2.8
- GSK2014-A(2019)
metrics:
- UAS
- LAS
- UPOS
---
# transformers-ud-japanese-electra-ginza-520 (sudachitra-wordpiece, mC4 Japanese)
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on approximately 200M Japanese sentences extracted from the [mC4](https://huggingface.co/datasets/mc4) and finetuned by [spaCy v3](https://spacy.io/usage/v3) on [UD\_Japanese\_BCCWJ r2.8](https://universaldependencies.org/treebanks/ja_bccwj/index.html).
The base pretrained model is [megagonlabs/transformers-ud-japanese-electra-base-discriminator](https://huggingface.co/megagonlabs/transformers-ud-japanese-electra-base-discriminator).
The entire spaCy v3 model is distributed as a python package named [`ja_ginza_electra`](https://pypi.org/project/ja-ginza-electra/) from PyPI along with [`GiNZA v5`](https://github.com/megagonlabs/ginza) which provides some custom pipeline components to recognize the Japanese bunsetu-phrase structures.
Try running it as below:
```console
$ pip install ginza ja_ginza_electra
$ ginza
```
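A minimal Python sketch of the same pipeline (assuming spaCy and the `ja_ginza_electra` package are installed; the sample sentence is arbitrary):
```python
import spacy

# Load the GiNZA v5 pipeline backed by this ELECTRA model.
nlp = spacy.load("ja_ginza_electra")

doc = nlp("銀座でランチをご一緒しましょう。")
for token in doc:
    print(token.i, token.orth_, token.lemma_, token.pos_, token.dep_, token.head.i)
```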
## Licenses
The models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php).
## Acknowledgments
This model is permitted to be published under the `MIT License` under a joint research agreement between NINJAL (National Institute for Japanese Language and Linguistics) and Megagon Labs Tokyo.
## Citations
- [mC4](https://huggingface.co/datasets/mc4)
Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
- [UD\_Japanese\_BCCWJ r2.8](https://universaldependencies.org/treebanks/ja_bccwj/index.html)
```
Asahara, M., Kanayama, H., Tanaka, T., Miyao, Y., Uematsu, S., Mori, S.,
Matsumoto, Y., Omura, M., & Murawaki, Y. (2018).
Universal Dependencies Version 2 for Japanese.
In LREC-2018.
```
- [GSK2014-A(2019)](https://www.gsk.or.jp/catalog/gsk2014-a/)
|
kmposkid1/Horse-Health-Outcome-6d56348a-8c02-416a-8d5d-fdf98b0d4f59 | kmposkid1 | 2023-09-21T17:38:09Z | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"license:mit",
"region:us"
]
| tabular-classification | 2023-09-20T13:45:41Z | ---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_format: pickle
model_file: LightGBM_without_hospital_number_01.pkl
widget:
structuredData:
abdomen:
- distend_small
- distend_small
- distend_large
abdominal_distention:
- none
- none
- moderate
abdomo_appearance:
- serosanguious
- cloudy
- serosanguious
abdomo_protein:
- 4.1
- 4.3
- 2.0
age:
- adult
- adult
- adult
capillary_refill_time:
- less_3_sec
- less_3_sec
- more_3_sec
cp_data:
- 'yes'
- 'yes'
- 'no'
lesion_1:
- 7209
- 2112
- 5400
lesion_2:
- 0
- 0
- 0
lesion_3:
- 0
- 0
- 0
mucous_membrane:
- bright_pink
- bright_pink
- dark_cyanotic
nasogastric_reflux:
- none
- none
- more_1_liter
nasogastric_reflux_ph:
- 7.0
- 3.5
- 2.0
nasogastric_tube:
- slight
- none
- significant
packed_cell_volume:
- 37.0
- 44.0
- 65.0
pain:
- depressed
- mild_pain
- extreme_pain
peripheral_pulse:
- normal
- normal
- reduced
peristalsis:
- hypermotile
- hypomotile
- absent
pulse:
- 84.0
- 66.0
- 72.0
rectal_exam_feces:
- absent
- decreased
- absent
rectal_temp:
- 39.0
- 38.5
- 37.3
respiratory_rate:
- 24.0
- 21.0
- 30.0
surgery:
- 'yes'
- 'yes'
- 'yes'
surgical_lesion:
- 'yes'
- 'yes'
- 'yes'
temp_of_extremities:
- cool
- normal
- cool
total_protein:
- 6.5
- 7.6
- 13.0
---
# Model description
This is a `LightGBM` model trained on horse health outcome data from Kaggle.
## Intended uses & limitations
This model is not ready to be used in production.
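A minimal loading sketch (assuming `huggingface_hub` and `joblib`; prediction requires a pandas DataFrame with the columns shown in the widget example above):
```python
import joblib
from huggingface_hub import hf_hub_download

# Download the pickled sklearn pipeline from the Hub.
# Only unpickle files from sources you trust.
model_path = hf_hub_download(
    repo_id="kmposkid1/Horse-Health-Outcome-6d56348a-8c02-416a-8d5d-fdf98b0d4f59",
    filename="LightGBM_without_hospital_number_01.pkl",
)
model = joblib.load(model_path)

# `horse_df` would be a DataFrame with columns such as surgery, age,
# rectal_temp, pulse, ..., cp_data (see the widget metadata above).
# predictions = model.predict(horse_df)
```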
## Training Procedure
[More Information Needed]
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| memory | |
| steps | [('preprocessor', ColumnTransformer(remainder='passthrough',<br /> transformers=[('num',<br /> Pipeline(steps=[('imputer',<br /> SimpleImputer(strategy='median')),<br /> ('scaler', StandardScaler())]),<br /> ['rectal_temp', 'pulse', 'respiratory_rate',<br /> 'nasogastric_reflux_ph', 'packed_cell_volume',<br /> 'total_protein', 'abdomo_protein', 'lesion_1',<br /> 'lesion_2', 'lesion_3']),<br /> ('cat',<br /> Pipeline(steps=[('imputer',<br /> SimpleI...='missing',<br /> strategy='constant')),<br /> ('onehot',<br /> OneHotEncoder(handle_unknown='ignore'))]),<br /> ['surgery', 'age', 'temp_of_extremities',<br /> 'peripheral_pulse', 'mucous_membrane',<br /> 'capillary_refill_time', 'pain',<br /> 'peristalsis', 'abdominal_distention',<br /> 'nasogastric_tube', 'nasogastric_reflux',<br /> 'rectal_exam_feces', 'abdomen',<br /> 'abdomo_appearance', 'surgical_lesion',<br /> 'cp_data'])])), ('classifier', LGBMClassifier(max_depth=3))] |
| verbose | False |
| preprocessor | ColumnTransformer(remainder='passthrough',<br /> transformers=[('num',<br /> Pipeline(steps=[('imputer',<br /> SimpleImputer(strategy='median')),<br /> ('scaler', StandardScaler())]),<br /> ['rectal_temp', 'pulse', 'respiratory_rate',<br /> 'nasogastric_reflux_ph', 'packed_cell_volume',<br /> 'total_protein', 'abdomo_protein', 'lesion_1',<br /> 'lesion_2', 'lesion_3']),<br /> ('cat',<br /> Pipeline(steps=[('imputer',<br /> SimpleI...='missing',<br /> strategy='constant')),<br /> ('onehot',<br /> OneHotEncoder(handle_unknown='ignore'))]),<br /> ['surgery', 'age', 'temp_of_extremities',<br /> 'peripheral_pulse', 'mucous_membrane',<br /> 'capillary_refill_time', 'pain',<br /> 'peristalsis', 'abdominal_distention',<br /> 'nasogastric_tube', 'nasogastric_reflux',<br /> 'rectal_exam_feces', 'abdomen',<br /> 'abdomo_appearance', 'surgical_lesion',<br /> 'cp_data'])]) |
| classifier | LGBMClassifier(max_depth=3) |
| preprocessor__n_jobs | |
| preprocessor__remainder | passthrough |
| preprocessor__sparse_threshold | 0.3 |
| preprocessor__transformer_weights | |
| preprocessor__transformers | [('num', Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),<br /> ('scaler', StandardScaler())]), ['rectal_temp', 'pulse', 'respiratory_rate', 'nasogastric_reflux_ph', 'packed_cell_volume', 'total_protein', 'abdomo_protein', 'lesion_1', 'lesion_2', 'lesion_3']), ('cat', Pipeline(steps=[('imputer',<br /> SimpleImputer(fill_value='missing', strategy='constant')),<br /> ('onehot', OneHotEncoder(handle_unknown='ignore'))]), ['surgery', 'age', 'temp_of_extremities', 'peripheral_pulse', 'mucous_membrane', 'capillary_refill_time', 'pain', 'peristalsis', 'abdominal_distention', 'nasogastric_tube', 'nasogastric_reflux', 'rectal_exam_feces', 'abdomen', 'abdomo_appearance', 'surgical_lesion', 'cp_data'])] |
| preprocessor__verbose | False |
| preprocessor__verbose_feature_names_out | True |
| preprocessor__num | Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),<br /> ('scaler', StandardScaler())]) |
| preprocessor__cat | Pipeline(steps=[('imputer',<br /> SimpleImputer(fill_value='missing', strategy='constant')),<br /> ('onehot', OneHotEncoder(handle_unknown='ignore'))]) |
| preprocessor__num__memory | |
| preprocessor__num__steps | [('imputer', SimpleImputer(strategy='median')), ('scaler', StandardScaler())] |
| preprocessor__num__verbose | False |
| preprocessor__num__imputer | SimpleImputer(strategy='median') |
| preprocessor__num__scaler | StandardScaler() |
| preprocessor__num__imputer__add_indicator | False |
| preprocessor__num__imputer__copy | True |
| preprocessor__num__imputer__fill_value | |
| preprocessor__num__imputer__keep_empty_features | False |
| preprocessor__num__imputer__missing_values | nan |
| preprocessor__num__imputer__strategy | median |
| preprocessor__num__scaler__copy | True |
| preprocessor__num__scaler__with_mean | True |
| preprocessor__num__scaler__with_std | True |
| preprocessor__cat__memory | |
| preprocessor__cat__steps | [('imputer', SimpleImputer(fill_value='missing', strategy='constant')), ('onehot', OneHotEncoder(handle_unknown='ignore'))] |
| preprocessor__cat__verbose | False |
| preprocessor__cat__imputer | SimpleImputer(fill_value='missing', strategy='constant') |
| preprocessor__cat__onehot | OneHotEncoder(handle_unknown='ignore') |
| preprocessor__cat__imputer__add_indicator | False |
| preprocessor__cat__imputer__copy | True |
| preprocessor__cat__imputer__fill_value | missing |
| preprocessor__cat__imputer__keep_empty_features | False |
| preprocessor__cat__imputer__missing_values | nan |
| preprocessor__cat__imputer__strategy | constant |
| preprocessor__cat__onehot__categories | auto |
| preprocessor__cat__onehot__drop | |
| preprocessor__cat__onehot__dtype | <class 'numpy.float64'> |
| preprocessor__cat__onehot__feature_name_combiner | concat |
| preprocessor__cat__onehot__handle_unknown | ignore |
| preprocessor__cat__onehot__max_categories | |
| preprocessor__cat__onehot__min_frequency | |
| preprocessor__cat__onehot__sparse | deprecated |
| preprocessor__cat__onehot__sparse_output | True |
| classifier__boosting_type | gbdt |
| classifier__class_weight | |
| classifier__colsample_bytree | 1.0 |
| classifier__importance_type | split |
| classifier__learning_rate | 0.1 |
| classifier__max_depth | 3 |
| classifier__min_child_samples | 20 |
| classifier__min_child_weight | 0.001 |
| classifier__min_split_gain | 0.0 |
| classifier__n_estimators | 100 |
| classifier__n_jobs | |
| classifier__num_leaves | 31 |
| classifier__objective | |
| classifier__random_state | |
| classifier__reg_alpha | 0.0 |
| classifier__reg_lambda | 0.0 |
| classifier__subsample | 1.0 |
| classifier__subsample_for_bin | 200000 |
| classifier__subsample_freq | 0 |
</details>
### Model Plot
<style>#sk-container-id-3 {color: black;}#sk-container-id-3 pre{padding: 0;}#sk-container-id-3 div.sk-toggleable {background-color: white;}#sk-container-id-3 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-3 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-3 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-3 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-3 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-3 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-3 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-3 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-3 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-3 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-3 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-3 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-3 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-3 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-3 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-3 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-3 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-3 div.sk-item {position: relative;z-index: 1;}#sk-container-id-3 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-3 div.sk-item::before, #sk-container-id-3 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-3 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-3 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-3 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-3 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-3 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-3 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-3 div.sk-label-container {text-align: center;}#sk-container-id-3 div.sk-container {/* 
jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-3 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-3" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>Pipeline(steps=[('preprocessor',ColumnTransformer(remainder='passthrough',transformers=[('num',Pipeline(steps=[('imputer',SimpleImputer(strategy='median')),('scaler',StandardScaler())]),['rectal_temp', 'pulse','respiratory_rate','nasogastric_reflux_ph','packed_cell_volume','total_protein','abdomo_protein', 'lesion_1','lesion_2', 'lesion_3']),('cat',Pi...OneHotEncoder(handle_unknown='ignore'))]),['surgery', 'age','temp_of_extremities','peripheral_pulse','mucous_membrane','capillary_refill_time','pain', 'peristalsis','abdominal_distention','nasogastric_tube','nasogastric_reflux','rectal_exam_feces','abdomen','abdomo_appearance','surgical_lesion','cp_data'])])),('classifier', LGBMClassifier(max_depth=3))])</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-23" type="checkbox" ><label for="sk-estimator-id-23" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[('preprocessor',ColumnTransformer(remainder='passthrough',transformers=[('num',Pipeline(steps=[('imputer',SimpleImputer(strategy='median')),('scaler',StandardScaler())]),['rectal_temp', 'pulse','respiratory_rate','nasogastric_reflux_ph','packed_cell_volume','total_protein','abdomo_protein', 'lesion_1','lesion_2', 'lesion_3']),('cat',Pi...OneHotEncoder(handle_unknown='ignore'))]),['surgery', 'age','temp_of_extremities','peripheral_pulse','mucous_membrane','capillary_refill_time','pain', 'peristalsis','abdominal_distention','nasogastric_tube','nasogastric_reflux','rectal_exam_feces','abdomen','abdomo_appearance','surgical_lesion','cp_data'])])),('classifier', LGBMClassifier(max_depth=3))])</pre></div></div></div><div class="sk-serial"><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-24" type="checkbox" ><label for="sk-estimator-id-24" class="sk-toggleable__label sk-toggleable__label-arrow">preprocessor: ColumnTransformer</label><div class="sk-toggleable__content"><pre>ColumnTransformer(remainder='passthrough',transformers=[('num',Pipeline(steps=[('imputer',SimpleImputer(strategy='median')),('scaler', StandardScaler())]),['rectal_temp', 'pulse', 'respiratory_rate','nasogastric_reflux_ph', 'packed_cell_volume','total_protein', 'abdomo_protein', 'lesion_1','lesion_2', 'lesion_3']),('cat',Pipeline(steps=[('imputer',SimpleI...='missing',strategy='constant')),('onehot',OneHotEncoder(handle_unknown='ignore'))]),['surgery', 'age', 'temp_of_extremities','peripheral_pulse', 
'mucous_membrane','capillary_refill_time', 'pain','peristalsis', 'abdominal_distention','nasogastric_tube', 'nasogastric_reflux','rectal_exam_feces', 'abdomen','abdomo_appearance', 'surgical_lesion','cp_data'])])</pre></div></div></div><div class="sk-parallel"><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-25" type="checkbox" ><label for="sk-estimator-id-25" class="sk-toggleable__label sk-toggleable__label-arrow">num</label><div class="sk-toggleable__content"><pre>['rectal_temp', 'pulse', 'respiratory_rate', 'nasogastric_reflux_ph', 'packed_cell_volume', 'total_protein', 'abdomo_protein', 'lesion_1', 'lesion_2', 'lesion_3']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-26" type="checkbox" ><label for="sk-estimator-id-26" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer(strategy='median')</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-27" type="checkbox" ><label for="sk-estimator-id-27" class="sk-toggleable__label sk-toggleable__label-arrow">StandardScaler</label><div class="sk-toggleable__content"><pre>StandardScaler()</pre></div></div></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-28" type="checkbox" ><label for="sk-estimator-id-28" class="sk-toggleable__label sk-toggleable__label-arrow">cat</label><div class="sk-toggleable__content"><pre>['surgery', 'age', 'temp_of_extremities', 'peripheral_pulse', 'mucous_membrane', 'capillary_refill_time', 'pain', 'peristalsis', 'abdominal_distention', 'nasogastric_tube', 'nasogastric_reflux', 'rectal_exam_feces', 'abdomen', 'abdomo_appearance', 'surgical_lesion', 'cp_data']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-29" type="checkbox" ><label for="sk-estimator-id-29" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer(fill_value='missing', strategy='constant')</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-30" type="checkbox" ><label for="sk-estimator-id-30" class="sk-toggleable__label sk-toggleable__label-arrow">OneHotEncoder</label><div class="sk-toggleable__content"><pre>OneHotEncoder(handle_unknown='ignore')</pre></div></div></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-31" type="checkbox" ><label for="sk-estimator-id-31" class="sk-toggleable__label sk-toggleable__label-arrow">remainder</label><div class="sk-toggleable__content"><pre>[]</pre></div></div></div><div class="sk-serial"><div 
class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-32" type="checkbox" ><label for="sk-estimator-id-32" class="sk-toggleable__label sk-toggleable__label-arrow">passthrough</label><div class="sk-toggleable__content"><pre>passthrough</pre></div></div></div></div></div></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-33" type="checkbox" ><label for="sk-estimator-id-33" class="sk-toggleable__label sk-toggleable__label-arrow">LGBMClassifier</label><div class="sk-toggleable__content"><pre>LGBMClassifier(max_depth=3)</pre></div></div></div></div></div></div></div>
## Evaluation Results
| Metric | Value |
|----------|----------|
| accuracy | 0.740891 |
| f1 score | 0.740891 |
### Confusion Matrix

## Permutation Importance

# How to Get Started with the Model
[More Information Needed]
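Pending the authors' own instructions, a minimal sketch of loading the exported scikit-learn pipeline is shown below; the repository id and filename are hypothetical placeholders, and the repository may instead store the model in `skops` format.
```python
import joblib
from huggingface_hub import hf_hub_download

# NOTE: repo_id and filename are hypothetical placeholders -- check the repository's file list.
path = hf_hub_download(repo_id="kmposkid/horse-health-outcome", filename="model.pkl")
pipe = joblib.load(path)

# The pipeline expects a pandas DataFrame with the same columns used during training,
# e.g. pipe.predict_proba(df) returns class probabilities for the horse health outcome.
```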
# Model Card Authors
kmposkid
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
|
benedikt-schaber/q-Taxi-v3 | benedikt-schaber | 2023-09-21T17:35:46Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-21T17:35:45Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper from the Hugging Face Deep RL course (downloads the pickled Q-table)
model = load_from_hub(repo_id="benedikt-schaber/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AmirH98/Q-Taxi-V3 | AmirH98 | 2023-09-21T17:35:01Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-21T17:35:00Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper from the Hugging Face Deep RL course (downloads the pickled Q-table)
model = load_from_hub(repo_id="AmirH98/Q-Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
steveice/videomae-large-finetuned-kinetics-finetuned-videomae-large-kitchen | steveice | 2023-09-21T17:13:55Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-large-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2023-09-20T21:16:12Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-large-finetuned-kinetics-finetuned-videomae-large-kitchen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-large-finetuned-kinetics-finetuned-videomae-large-kitchen
This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6309
- Accuracy: 0.8900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 11100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.5158 | 0.02 | 222 | 3.6067 | 0.0588 |
| 2.8571 | 1.02 | 444 | 3.1445 | 0.3014 |
| 1.8854 | 2.02 | 666 | 2.3644 | 0.4607 |
| 1.5533 | 3.02 | 888 | 1.7967 | 0.5621 |
| 1.3935 | 4.02 | 1110 | 1.3755 | 0.6502 |
| 1.1722 | 5.02 | 1332 | 1.2232 | 0.7109 |
| 0.2896 | 6.02 | 1554 | 1.2859 | 0.6256 |
| 0.3166 | 7.02 | 1776 | 1.2910 | 0.6720 |
| 0.6902 | 8.02 | 1998 | 1.2702 | 0.6995 |
| 0.4193 | 9.02 | 2220 | 1.2087 | 0.7137 |
| 0.1889 | 10.02 | 2442 | 1.0500 | 0.7611 |
| 0.4502 | 11.02 | 2664 | 1.1647 | 0.7118 |
| 0.7703 | 12.02 | 2886 | 1.1037 | 0.7242 |
| 0.0957 | 13.02 | 3108 | 1.0967 | 0.7706 |
| 0.3202 | 14.02 | 3330 | 1.0479 | 0.7545 |
| 0.3634 | 15.02 | 3552 | 1.0714 | 0.8057 |
| 0.3883 | 16.02 | 3774 | 1.2323 | 0.7498 |
| 0.0322 | 17.02 | 3996 | 1.0504 | 0.7848 |
| 0.5108 | 18.02 | 4218 | 1.1356 | 0.7915 |
| 0.309 | 19.02 | 4440 | 1.1409 | 0.7592 |
| 0.56 | 20.02 | 4662 | 1.0828 | 0.7915 |
| 0.3675 | 21.02 | 4884 | 0.9154 | 0.8123 |
| 0.0076 | 22.02 | 5106 | 1.0974 | 0.8133 |
| 0.0451 | 23.02 | 5328 | 1.0361 | 0.8152 |
| 0.2558 | 24.02 | 5550 | 0.7830 | 0.8237 |
| 0.0125 | 25.02 | 5772 | 0.8728 | 0.8171 |
| 0.4184 | 26.02 | 5994 | 0.8413 | 0.8265 |
| 0.2566 | 27.02 | 6216 | 1.0644 | 0.8009 |
| 0.1257 | 28.02 | 6438 | 0.8641 | 0.8265 |
| 0.1326 | 29.02 | 6660 | 0.8444 | 0.8417 |
| 0.0436 | 30.02 | 6882 | 0.8615 | 0.8322 |
| 0.0408 | 31.02 | 7104 | 0.8075 | 0.8332 |
| 0.0316 | 32.02 | 7326 | 0.8699 | 0.8341 |
| 0.2235 | 33.02 | 7548 | 0.8151 | 0.8455 |
| 0.0079 | 34.02 | 7770 | 0.8099 | 0.8550 |
| 0.001 | 35.02 | 7992 | 0.8640 | 0.8370 |
| 0.0007 | 36.02 | 8214 | 0.7146 | 0.8483 |
| 0.464 | 37.02 | 8436 | 0.7917 | 0.8464 |
| 0.0005 | 38.02 | 8658 | 0.7239 | 0.8531 |
| 0.0004 | 39.02 | 8880 | 0.7702 | 0.8701 |
| 0.1705 | 40.02 | 9102 | 0.7543 | 0.8521 |
| 0.0039 | 41.02 | 9324 | 0.7456 | 0.8673 |
| 0.0168 | 42.02 | 9546 | 0.7255 | 0.8730 |
| 0.2615 | 43.02 | 9768 | 0.7453 | 0.8758 |
| 0.0004 | 44.02 | 9990 | 0.6824 | 0.8806 |
| 0.236 | 45.02 | 10212 | 0.6624 | 0.8825 |
| 0.0007 | 46.02 | 10434 | 0.6727 | 0.8815 |
| 0.0004 | 47.02 | 10656 | 0.6478 | 0.8863 |
| 0.268 | 48.02 | 10878 | 0.6309 | 0.8900 |
| 0.0025 | 49.02 | 11100 | 0.6284 | 0.8900 |
### Framework versions
- Transformers 4.33.2
- Pytorch 1.12.1+cu113
- Datasets 2.14.5
- Tokenizers 0.13.3
|
annahaz/xlm-roberta-base-misogyny-sexism-tweets | annahaz | 2023-09-21T17:11:00Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-19T17:14:54Z |
This model was an experiment BUT NOT THE FINAL MODEL.
The final model was ***annahaz/xlm-roberta-base-misogyny-sexism-indomain-mix-bal*** (https://huggingface.co/annahaz/xlm-roberta-base-misogyny-sexism-indomain-mix-bal)
Please consider using/trying that model instead.
This model was an experiment for the following paper BUT THIS MODEL IS NOT THE FINAL MODEL:
```
@InProceedings{10.1007/978-3-031-43129-6_9,
author="Chang, Rong-Ching
and May, Jonathan
and Lerman, Kristina",
editor="Thomson, Robert
and Al-khateeb, Samer
and Burger, Annetta
and Park, Patrick
and A. Pyke, Aryn",
title="Feedback Loops and Complex Dynamics of Harmful Speech in Online Discussions",
booktitle="Social, Cultural, and Behavioral Modeling",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="85--94",
abstract="Harmful and toxic speech contribute to an unwelcoming online environment that suppresses participation and conversation. Efforts have focused on detecting and mitigating harmful speech; however, the mechanisms by which toxicity degrades online discussions are not well understood. This paper makes two contributions. First, to comprehensively model harmful comments, we introduce a multilingual misogyny and sexist speech detection model (https://huggingface.co/annahaz/xlm-roberta-base-misogyny-sexism-indomain-mix-bal). Second, we model the complex dynamics of online discussions as feedback loops in which harmful comments lead to negative emotions which prompt even more harmful comments. To quantify the feedback loops, we use a combination of mutual Granger causality and regression to analyze discussions on two political forums on Reddit: the moderated political forum r/Politics and the moderated neutral political forum r/NeutralPolitics. Our results suggest that harmful comments and negative emotions create self-reinforcing feedback loops in forums. Contrarily, moderation with neutral discussion appears to tip interactions into self-extinguishing feedback loops that reduce harmful speech and negative emotions. Our study sheds more light on the complex dynamics of harmful speech and the role of moderation and neutral discussion in mitigating these dynamics.",
isbn="978-3-031-43129-6"
}
```
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-misogyny-sexism-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-misogyny-sexism-tweets
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5009
- Accuracy: 0.796
- F1: 0.8132
- Precision: 0.75
- Recall: 0.888
- Mae: 0.204
- Tn: 352
- Fp: 148
- Fn: 56
- Tp: 444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----:|:---:|:---:|:--:|:---:|
| 0.4947 | 1.0 | 1646 | 0.4683 | 0.765 | 0.7866 | 0.7205 | 0.866 | 0.235 | 332 | 168 | 67 | 433 |
| 0.4285 | 2.0 | 3292 | 0.4514 | 0.779 | 0.8004 | 0.7298 | 0.886 | 0.221 | 336 | 164 | 57 | 443 |
| 0.3721 | 3.0 | 4938 | 0.4430 | 0.781 | 0.8060 | 0.7234 | 0.91 | 0.219 | 326 | 174 | 45 | 455 |
| 0.3127 | 4.0 | 6584 | 0.5009 | 0.796 | 0.8132 | 0.75 | 0.888 | 0.204 | 352 | 148 | 56 | 444 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
newronai/clma2-13b-Chat-Adapter-NasdaqBalanced-1epoch | newronai | 2023-09-21T17:05:34Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-21T17:05:26Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
calculater/looking-up | calculater | 2023-09-21T17:03:38Z | 0 | 4 | null | [
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-21T15:55:59Z | ---
pipeline_tag: text-to-image
license: creativeml-openrail-m
---
[looking-up](https://huggingface.co/hhpoo/looking-up/blob/main/looking-up.safetensors)
This is a LoRA that raises the character's gaze. Place the file in your WebUI's designated LoRA folder to use it.
No trigger tags are defined; simply applying the LoRA should make the subject look toward the top of the frame.
Setting the weight to a negative value may also make the subject look downward instead.
 |
amirabdullah19852020/pythia-410m_utility_reward | amirabdullah19852020 | 2023-09-21T17:03:36Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| reinforcement-learning | 2023-09-21T15:26:30Z | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="amirabdullah19852020/pythia-410m_utility_reward")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("amirabdullah19852020/pythia-410m_utility_reward")
model = AutoModelForCausalLMWithValueHead.from_pretrained("amirabdullah19852020/pythia-410m_utility_reward")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
TheBlokeAI/jackfram_llama-68m-GPTQ | TheBlokeAI | 2023-09-21T16:33:08Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-09-21T16:30:27Z | A 4-bit, 128g, act_order=True GPTQ quantisation of JackFram/llama-68m, a 68 million parameter Llama1 model; created on request for software testing.
Not for normal usage! |
gpadam/autotrain-prospero-query-training-87679143506 | gpadam | 2023-09-21T16:23:36Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:gpadam/autotrain-data-prospero-query-training",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-09-07T11:44:46Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- gpadam/autotrain-data-prospero-query-training
co2_eq_emissions:
emissions: 16.811591021038232
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 87679143506
- CO2 Emissions (in grams): 16.8116
## Validation Metrics
- Loss: 1.544
- Rouge1: 26.107
- Rouge2: 12.267
- RougeL: 22.582
- RougeLsum: 22.590
- Gen Len: 19.956
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/gpadam/autotrain-prospero-query-training-87679143506
``` |
Panchovix/airoboros-l2-70b-gpt4-1.4.1_4bit-bpw_variants_h6-exl2 | Panchovix | 2023-09-21T16:21:13Z | 5 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T05:59:53Z | ---
license: other
---
4-bit variant quantizations of Airoboros L2 70B GPT4 1.4.1 (https://huggingface.co/jondurbin/airoboros-l2-70b-gpt4-1.4.1), made with ExLlamaV2.
You can find 4.25bpw (main branch), 4.5bpw and 4.75bpw in their respective branches.
Update 21/09/2023
Re-quanted all variants with the latest ExLlamaV2 version, which fixed some measurement issues.
Parkhat/llama2-qlora-finetunined-french | Parkhat | 2023-09-21T16:20:44Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-21T16:20:38Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
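For reference, a minimal sketch of expressing the same settings as a `transformers.BitsAndBytesConfig` when reloading a base model (the base checkpoint is not named in this card, so it is left as a placeholder):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization config listed above (4-bit NF4, float16 compute, no double quant)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Pass as: AutoModelForCausalLM.from_pretrained(<base model>, quantization_config=bnb_config)
```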
### Framework versions
- PEFT 0.6.0.dev0
|
salesforce/blipdiffusion-controlnet | salesforce | 2023-09-21T15:55:24Z | 85 | 2 | diffusers | [
"diffusers",
"en",
"arxiv:2305.14720",
"license:apache-2.0",
"diffusers:BlipDiffusionControlNetPipeline",
"region:us"
]
| null | 2023-09-21T15:55:24Z | ---
license: apache-2.0
language:
- en
library_name: diffusers
---
# BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
<!-- Provide a quick summary of what the model is/does. -->
Model card for BLIP-Diffusion, a text to image Diffusion model which enables zero-shot subject-driven generation and control-guided zero-shot generation.
The abstract from the paper is:
*Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications.*
The model is created by Dongxu Li, Junnan Li, Steven C.H. Hoi.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Original Repository:** https://github.com/salesforce/LAVIS/tree/main
- **Project Page:** https://dxli94.github.io/BLIP-Diffusion-website/
## Uses
### Zero-Shot Subject Driven Generation
```python
from diffusers.pipelines import BlipDiffusionPipeline
from diffusers.utils import load_image
import torch
blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained(
"Salesforce/blipdiffusion", torch_dtype=torch.float16
).to("cuda")
cond_subject = "dog"
tgt_subject = "dog"
text_prompt_input = "swimming underwater"
cond_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg"
)
iter_seed = 88888
guidance_scale = 7.5
num_inference_steps = 25
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt_input,
cond_image,
cond_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog_underwater.png" style="width:500px;"/>
### Controlled subject-driven generation
```python
from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import CannyDetector
import torch
blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained(
"Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16
).to("cuda")
style_subject = "flower" # subject that defines the style
tgt_subject = "teapot" # subject to generate.
text_prompt = "on a marble table"
cldm_cond_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg"
).resize((512, 512))
canny = CannyDetector()
cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil")
style_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg"
)
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt,
style_image,
cldm_cond_image,
style_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Style Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" style="width:500px;"/>
Canny Edge Input : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/canny_generated.png" style="width:500px;"/>
### Controlled subject-driven generation Scribble
```python
from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.models import ControlNetModel
from diffusers.utils import load_image
from controlnet_aux import HEDdetector
blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained(
"Salesforce/blipdiffusion-controlnet"
)
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-scribble")
blip_diffusion_pipe.controlnet = controlnet
blip_diffusion_pipe.to("cuda")
style_subject = "flower" # subject that defines the style
tgt_subject = "bag" # subject to generate.
text_prompt = "on a table"
cldm_cond_image = load_image(
"https://huggingface.co/lllyasviel/sd-controlnet-scribble/resolve/main/images/bag.png"
).resize((512, 512))
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
cldm_cond_image = hed(cldm_cond_image)
style_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg"
)
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt,
style_image,
cldm_cond_image,
style_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Style Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" style="width:500px;"/>
Scribble Input : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/scribble.png" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/scribble_output.png" style="width:500px;"/>
## Model Architecture
Blip-Diffusion learns a **pre-trained subject representation**. Such representation aligns with text embeddings and at the same time encodes the subject appearance. This allows efficient fine-tuning of the model for high-fidelity subject-driven applications, such as text-to-image generation, editing and style transfer.
To this end, they design a two-stage pre-training strategy to learn generic subject representation. In the first pre-training stage, they perform multimodal representation learning, which enforces BLIP-2 to produce text-aligned visual features based on the input image. In the second pre-training stage, they design a subject representation learning task, called prompted context generation, where the diffusion model learns to generate novel subject renditions based on the input visual features.
To achieve this, they curate pairs of input-target images with the same subject appearing in different contexts. Specifically, they synthesize input images by composing the subject with a random background. During pre-training, they feed the synthetic input image and the subject class label through BLIP-2 to obtain the multimodal embeddings as subject representation. The subject representation is then combined with a text prompt to guide the generation of the target image.

The architecture is also compatible to integrate with established techniques built on top of the diffusion model, such as ControlNet.
They attach the U-Net of the pre-trained ControlNet to that of BLIP-Diffusion via residuals. In this way, the model takes into account the input structure condition, such as edge maps and depth maps, in addition to the subject cues. Since the model inherits the architecture of the original latent diffusion model, they observe satisfying generations using off-the-shelf integration with pre-trained ControlNet without further training.
<img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/arch_controlnet.png" style="width:50%;"/>
## Citation
**BibTeX:**
If you find this repository useful in your research, please cite:
```
@misc{li2023blipdiffusion,
title={BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing},
author={Dongxu Li and Junnan Li and Steven C. H. Hoi},
year={2023},
eprint={2305.14720},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
am-infoweb/QA_SYNTH_19_SEPT_FINETUNE_1.0 | am-infoweb | 2023-09-21T15:51:29Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T15:00:50Z | ---
tags:
- generated_from_trainer
model-index:
- name: QA_SYNTH_19_SEPT_FINETUNE_1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_SYNTH_19_SEPT_FINETUNE_1.0
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1211 | 1.0 | 1350 | 0.1318 |
| 0.0599 | 2.0 | 2700 | 0.1617 |
| 0.0571 | 3.0 | 4050 | 0.0833 |
| 0.0248 | 4.0 | 5400 | 0.0396 |
| 0.0154 | 5.0 | 6750 | 0.0911 |
| 0.0 | 6.0 | 8100 | 0.1054 |
| 0.0 | 7.0 | 9450 | 0.1086 |
| 0.0 | 8.0 | 10800 | 0.1224 |
| 0.0002 | 9.0 | 12150 | 0.1155 |
| 0.0025 | 10.0 | 13500 | 0.1182 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ShivamMangale/XLM-Roberta-base-allhiweakdap_5th_iteration_d5_d4 | ShivamMangale | 2023-09-21T15:44:38Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T14:53:27Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-allhiweakdap_5th_iteration_d5_d4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-allhiweakdap_5th_iteration_d5_d4
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4580000000000001e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tim-d/CurtGPT | tim-d | 2023-09-21T15:39:01Z | 17 | 1 | peft | [
"peft",
"text-generation",
"en",
"dataset:LDJnr/Puffin",
"dataset:pvduy/rm_hh_helpful_only",
"arxiv:2305.14314",
"arxiv:2305.18290",
"license:other",
"region:us"
]
| text-generation | 2023-09-21T15:24:37Z | ---
license: other
language:
- en
pipeline_tag: text-generation
datasets:
- LDJnr/Puffin
- pvduy/rm_hh_helpful_only
library_name: peft
widget:
- text: "USER: What's better, farming, or using computers (which suck)\nASSISTANT:"
---
<table>
<tr>
<td style="width: 30%; text-align: left; vertical-align: middle">
# CurtGPT
Using Microsoft's Phi 1.5 model like it was never intended.
</td>
<td style="text-align: center;">
<img src="https://github.com/tim-a-davis/silly_little_language_modeling_thing_at_utd/blob/main/curtgpt%20logo.png?raw=true" width="300" height="auto">
</td>
</tr>
</table>
# Main Procedure
This model is an adapter on [puffin phi v2](https://huggingface.co/teknium/Puffin-Phi-v2) trained using [QLoRA](https://arxiv.org/pdf/2305.14314.pdf) and [DPO](https://arxiv.org/pdf/2305.18290.pdf) on 60,000 samples from the [anthropic helpful only](https://huggingface.co/datasets/pvduy/rm_hh_helpful_only) dataset.
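A minimal inference sketch with PEFT, assuming the adapter is applied on top of the Puffin-Phi-v2 base model named above (the Phi architecture required `trust_remote_code` at the time):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "teknium/Puffin-Phi-v2"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, trust_remote_code=True)

# Attach the CurtGPT DPO adapter on top of the base model
model = PeftModel.from_pretrained(base, "tim-d/CurtGPT")

prompt = "USER: What's better, farming, or using computers (which suck)\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```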
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0 |
ryatora/distilbert-base-uncased-finetuned-emotion | ryatora | 2023-09-21T15:36:40Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-19T12:44:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9224787080842691
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2185
- Accuracy: 0.9225
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8423 | 1.0 | 250 | 0.3084 | 0.9065 | 0.9049 |
| 0.2493 | 2.0 | 500 | 0.2185 | 0.9225 | 0.9225 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
ShivamMangale/XLM-Roberta-base-allhiweakdap_5th_iteration_d5 | ShivamMangale | 2023-09-21T15:35:52Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T14:45:26Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-allhiweakdap_5th_iteration_d5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-allhiweakdap_5th_iteration_d5
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3122e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
neksjgg/rav3nus | neksjgg | 2023-09-21T15:22:16Z | 0 | 0 | null | [
"streamer",
"twitch",
"ru",
"region:us"
]
| null | 2023-09-21T15:15:09Z | ---
language:
- ru
tags:
- streamer
- twitch
--- |
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2_d1 | ShivamMangale | 2023-09-21T14:52:39Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T14:34:47Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2_d1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2_d1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
mann-e/mann-e_5.4 | mann-e | 2023-09-21T14:52:30Z | 3 | 0 | diffusers | [
"diffusers",
"text-to-image",
"region:us"
]
| text-to-image | 2023-09-21T12:47:14Z | ---
library_name: diffusers
pipeline_tag: text-to-image
---
# Mann-E 5.4
This repository holds the main model behind the [Mann-E](https://manne.ir) artificial intelligence platform.
## Features
1. _LoRA support_. In previous versions, most LoRA models didn't work well with the model.
2. _More coherent results_. Compared to the old versions, this version has a more "midjourney" feel to its outputs.
3. _New License_. Unlike old versions, this one isn't licensed under MIT; we decided to go with our own license.
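A minimal text-to-image sketch with 🧨 Diffusers, assuming the repository follows the standard diffusers pipeline layout:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("mann-e/mann-e_5.4", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "cinematic portrait of an astronaut standing in a field of flowers, golden hour"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("output.png")
```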
## Samples
<span align="center">
<img src="https://huggingface.co/mann-e/mann-e_5.4/resolve/main/grid-1.png" width=512px />
<br/>
<img src="https://huggingface.co/mann-e/mann-e_5.4/resolve/main/grid-2.png" width=512px />
<br/>
<img src="https://huggingface.co/mann-e/mann-e_5.4/resolve/main/grid-3.png" width=512px />
<br/>
<img src="https://huggingface.co/mann-e/mann-e_5.4/resolve/main/grid-4.png" width=512px />
<br/>
<img src="https://huggingface.co/mann-e/mann-e_5.4/resolve/main/grid-5.png" width=512px />
</span>
## License
This software and associated checkpoints are provided by Mann-E for educational and non-commercial use only. By accessing or using this software and checkpoints, you agree to the following terms and conditions:
1. Access and Use:
- You are granted the right to access and use the source code and checkpoints for educational and non-commercial purposes.
2. Modification and Distribution:
- You may modify and distribute the source code and checkpoints solely for educational and non-commercial purposes, provided that you retain this license notice.
3. Commercial Use:
- Commercial use of this software and checkpoints is strictly prohibited without the explicit written consent of the Copyright Holder.
4. Fine-tuning of Checkpoints:
- You may not fine-tune or modify the provided checkpoints without obtaining the express written consent of the Copyright Holder.
5. No Warranty:
- This software and checkpoints are provided "as is" without any warranty. The Copyright Holder shall not be liable for any damages or liabilities arising out of the use or inability to use the software and checkpoints.
6. Termination:
- This license is effective until terminated by the Copyright Holder. Your rights under this license will terminate automatically without notice from the Copyright Holder if you fail to comply with any term or condition of this license.
If you do not agree to these terms and conditions or do not have the legal authority to bind yourself, you may not use, modify, or distribute this software and checkpoints.
For inquiries regarding commercial use or fine-tuning of checkpoints, please contact Mann-E.
|
nickypro/tinyllama-15M-fp32 | nickypro | 2023-09-21T14:50:50Z | 152 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-16T17:23:46Z | ---
license: mit
---
This is the Float32 15M parameter Llama 2 architecture model trained on the TinyStories dataset.
These are converted from
[karpathy/tinyllamas](https://huggingface.co/karpathy/tinyllamas).
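A minimal generation sketch with 🤗 Transformers (this assumes the repository ships a compatible tokenizer; if not, a Llama 2 tokenizer can be substituted):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nickypro/tinyllama-15M-fp32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50, do_sample=True)[0], skip_special_tokens=True))
```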
See the [llama2.c](https://github.com/karpathy/llama2.c) project for more details. |
anjakuzev/13b_200 | anjakuzev | 2023-09-21T14:50:40Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-21T14:50:37Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
yunosuken/results | yunosuken | 2023-09-21T14:50:34Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:tohoku-nlp/bert-large-japanese-v2",
"base_model:finetune:tohoku-nlp/bert-large-japanese-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-13T14:15:12Z | ---
license: apache-2.0
base_model: cl-tohoku/bert-large-japanese-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-japanease-v2-gpt4-relevance-learned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-japanease-v2-gpt4-relevance-learned
This model is a fine-tuned version of [cl-tohoku/bert-large-japanese-v2](https://huggingface.co/cl-tohoku/bert-large-japanese-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2693
- Accuracy: 0.885
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 3.3692 | 1.0 | 563 | 3.2122 | 0.872 | 0.8560 |
| 3.0963 | 2.0 | 1126 | 3.1045 | 0.866 | 0.8625 |
| 2.8698 | 3.0 | 1689 | 3.1410 | 0.882 | 0.8755 |
| 2.6212 | 4.0 | 2252 | 3.2119 | 0.876 | 0.8702 |
| 2.407 | 5.0 | 2815 | 3.2693 | 0.885 | 0.8788 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
nickypro/tinyllama-110M-fp32 | nickypro | 2023-09-21T14:50:10Z | 166 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-16T17:26:31Z | ---
license: mit
---
This is the float32 110M parameter Llama 2 architecture model trained on the TinyStories dataset.
These are converted from
[karpathy/tinyllamas](https://huggingface.co/karpathy/tinyllamas).
See the [llama2.c](https://github.com/karpathy/llama2.c) project for more details. |
ayushtues/blipdiffusion | ayushtues | 2023-09-21T14:44:10Z | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"en",
"arxiv:2305.14720",
"license:apache-2.0",
"diffusers:BlipDiffusionPipeline",
"region:us"
]
| null | 2023-08-07T05:45:01Z | ---
license: apache-2.0
language:
- en
library_name: diffusers
---
# BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
<!-- Provide a quick summary of what the model is/does. -->
Model card for BLIP-Diffusion, a text-to-image diffusion model that enables zero-shot subject-driven generation and control-guided zero-shot generation.
The abstract from the paper is:
*Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications.*
The model is created by Dongxu Li, Junnan Li, Steven C.H. Hoi.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Original Repository:** https://github.com/salesforce/LAVIS/tree/main
- **Project Page:** https://dxli94.github.io/BLIP-Diffusion-website/
## Uses
### Zero-Shot Subject Driven Generation
```python
from diffusers.pipelines import BlipDiffusionPipeline
from diffusers.utils import load_image
import torch
blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained(
"Salesforce/blipdiffusion", torch_dtype=torch.float16
).to("cuda")
cond_subject = "dog"
tgt_subject = "dog"
text_prompt_input = "swimming underwater"
cond_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg"
)
iter_seed = 88888
guidance_scale = 7.5
num_inference_steps = 25
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt_input,
cond_image,
cond_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog_underwater.png" style="width:500px;"/>
### Controlled subject-driven generation
```python
import torch  # needed for torch_dtype below

from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import CannyDetector
blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained(
"Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16
).to("cuda")
style_subject = "flower" # subject that defines the style
tgt_subject = "teapot" # subject to generate.
text_prompt = "on a marble table"
cldm_cond_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg"
).resize((512, 512))
canny = CannyDetector()
cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil")
style_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg"
)
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt,
style_image,
cldm_cond_image,
style_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Style Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" style="width:500px;"/>
Canny Edge Input : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/canny_generated.png" style="width:500px;"/>
### Controlled subject-driven generation Scribble
```python
from diffusers import ControlNetModel
from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import HEDdetector
blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained(
"Salesforce/blipdiffusion-controlnet"
)
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-scribble")
blip_diffusion_pipe.controlnet = controlnet
blip_diffusion_pipe.to("cuda")
style_subject = "flower" # subject that defines the style
tgt_subject = "bag" # subject to generate.
text_prompt = "on a table"
cldm_cond_image = load_image(
"https://huggingface.co/lllyasviel/sd-controlnet-scribble/resolve/main/images/bag.png"
).resize((512, 512))
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
cldm_cond_image = hed(cldm_cond_image)
style_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg"
)
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt,
style_image,
cldm_cond_image,
style_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Style Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" style="width:500px;"/>
Scribble Input : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/scribble.png" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/scribble_output.png" style="width:500px;"/>
## Model Architecture
Blip-Diffusion learns a **pre-trained subject representation**. Such a representation aligns with text embeddings and at the same time encodes the subject's appearance. This allows efficient fine-tuning of the model for high-fidelity subject-driven applications, such as text-to-image generation, editing and style transfer.
To this end, they design a two-stage pre-training strategy to learn generic subject representation. In the first pre-training stage, they perform multimodal representation learning, which enforces BLIP-2 to produce text-aligned visual features based on the input image. In the second pre-training stage, they design a subject representation learning task, called prompted context generation, where the diffusion model learns to generate novel subject renditions based on the input visual features.
To achieve this, they curate pairs of input-target images with the same subject appearing in different contexts. Specifically, they synthesize input images by composing the subject with a random background. During pre-training, they feed the synthetic input image and the subject class label through BLIP-2 to obtain the multimodal embeddings as subject representation. The subject representation is then combined with a text prompt to guide the generation of the target image.

The architecture can also be integrated with established techniques built on top of the diffusion model, such as ControlNet.
They attach the U-Net of the pre-trained ControlNet to that of BLIP-Diffusion via residuals. In this way, the model takes into account the input structure condition, such as edge maps and depth maps, in addition to the subject cues. Since the model inherits the architecture of the original latent diffusion model, they observe satisfying generations using off-the-shelf integration with pre-trained ControlNet without further training.
<img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/arch_controlnet.png" style="width:50%;"/>
## Citation
**BibTeX:**
If you find this repository useful in your research, please cite:
```
@misc{li2023blipdiffusion,
title={BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing},
author={Dongxu Li and Junnan Li and Steven C. H. Hoi},
year={2023},
eprint={2305.14720},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2_d1_d0-hq | ShivamMangale | 2023-09-21T14:42:50Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T14:20:18Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2_d1_d0-hq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2_d1_d0-hq
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
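The card provides no usage example, so here is a minimal extractive question-answering sketch using the `transformers` pipeline; the question and context below are illustrative placeholders, not taken from the training data.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ShivamMangale/XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2_d1_d0-hq",
)

# Placeholder example -- replace with your own context and question (e.g. in Hindi).
result = qa(
    question="Where is the Taj Mahal located?",
    context="The Taj Mahal is an ivory-white marble mausoleum on the bank of the Yamuna river in Agra.",
)
print(result["answer"], result["score"])
```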
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
newronai/clma2-13b-Chat-Adapter-NasdaqBalanced-3epoch | newronai | 2023-09-21T14:41:07Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-21T14:41:00Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
jonas-luehrs/chembert_cased-textCLS-RHEOLOGY | jonas-luehrs | 2023-09-21T14:40:21Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:jiangg/chembert_cased",
"base_model:finetune:jiangg/chembert_cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-21T14:34:07Z | ---
base_model: jiangg/chembert_cased
tags:
- generated_from_trainer
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: chembert_cased-textCLS-RHEOLOGY
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chembert_cased-textCLS-RHEOLOGY
This model is a fine-tuned version of [jiangg/chembert_cased](https://huggingface.co/jiangg/chembert_cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6766
- F1: 0.7253
- Precision: 0.7446
- Recall: 0.7407
- Accuracy: 0.7407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|
| 1.2479 | 1.0 | 46 | 0.9758 | 0.6185 | 0.5919 | 0.6605 | 0.6605 |
| 0.8039 | 2.0 | 92 | 0.7210 | 0.7277 | 0.7472 | 0.7407 | 0.7407 |
| 0.5982 | 3.0 | 138 | 0.6766 | 0.7253 | 0.7446 | 0.7407 | 0.7407 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
LarryAIDraw/takina_inoue_v1 | LarryAIDraw | 2023-09-21T14:35:35Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-21T13:33:54Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/148903/takina-inoue-or-lycoris-recoil-5-outfits |
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2 | ShivamMangale | 2023-09-21T14:34:44Z | 133 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T14:23:28Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.8e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
FredericProtat/output | FredericProtat | 2023-09-21T14:33:01Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"dataset:decision_transformer_gym_replay",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-21T14:32:59Z | ---
tags:
- generated_from_trainer
datasets:
- decision_transformer_gym_replay
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [](https://huggingface.co/) on the decision_transformer_gym_replay dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 120
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ramboind/infra | ramboind | 2023-09-21T14:26:54Z | 0 | 0 | null | [
"license:cc-by-nc-nd-3.0",
"region:us"
]
| null | 2023-09-21T14:26:54Z | ---
license: cc-by-nc-nd-3.0
---
|
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2_d1-hq | ShivamMangale | 2023-09-21T14:20:17Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T14:06:26Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2_d1-hq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3_d2_d1-hq
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
yagizergil/llama2-yagiz | yagizergil | 2023-09-21T14:12:17Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-21T12:49:31Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the config sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
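For reference, the flags listed above map onto a `transformers` `BitsAndBytesConfig` roughly as sketched below; the base-model loading and trainer wiring are intentionally omitted.

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with float16 compute, matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```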
### Framework versions
- PEFT 0.4.0
|
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3_d2_d1_d0 | ShivamMangale | 2023-09-21T14:02:28Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T09:57:35Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3_d2_d1_d0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3_d2_d1_d0
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3-hq | ShivamMangale | 2023-09-21T13:56:46Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T13:48:23Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3-hq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4_d3-hq
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.62e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jmgb0127/bloom-lotr | jmgb0127 | 2023-09-21T13:56:28Z | 1 | 0 | peft | [
"peft",
"base_model:bigscience/bloom-3b",
"base_model:adapter:bigscience/bloom-3b",
"region:us"
]
| null | 2023-08-28T00:11:48Z | ---
library_name: peft
base_model: bigscience/bloom-3b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
cedric7ginobili/margaux | cedric7ginobili | 2023-09-21T13:56:23Z | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-20T12:04:11Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a margauxmafille person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
swastikhurana/q-Taxi-v1 | swastikhurana | 2023-09-21T13:54:37Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-21T13:54:35Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` here is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="swastikhurana/q-Taxi-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Coroseven/TEST | Coroseven | 2023-09-21T13:49:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-09-07T14:18:20Z | TEST 2 este un model combinat intre Primary model (A) - V3.0 Nordrin_little(诺德琳little); Secondary model (B) - aamAnyloraAnimeMixAnime_v1 ; Tertiary model (C) - aingdiffusion_v92 la Multiplier (M) - 0.5 Weighted sum
TEST 3 este un model combinat intre Primary model (A) - aamAnyloraAnimeMixAnime_v1 ; Secondary model (B) - V3.0 Nordrin_little(诺德琳little); la Multiplier (M) - 0.5 Weighted sum
TEST 5 este un model combinat intre Primary model (A) - aamAnyloraAnimeMixAnime_v1 ; Secondary model (B) - V3.0 Nordrin_little(诺德琳little); Tertiary model (C) - aingdiffusion_v92 la Multiplier (M) - 0.3 Weighted sum
TEST 6 este un model combinat intre Primary model (A) - aamAnyloraAnimeMixAnime_v1 ; Secondary model (B) - BlueAilandMix (blueailandmix_v11) ; la Multiplier (M) - 0.4 Weighted sum
TEST 12 este un model combinat intre Primary model (A) - aamAnyloraAnimeMixAnime_v1 ; Secondary model (B) - Sudachi (sudachi_v1.0) ; la Multiplier (M) - 0.5 Weighted sum
TEST 13 este un model combinat intre Primary model (A) - TEST 12 ; Secondary model (B) - AingDiffusion (AingDiffusion_v9.2) ; la Multiplier (M) - 0.4 Weighted sum |
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4-hq | ShivamMangale | 2023-09-21T13:48:23Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T13:40:12Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4-hq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_4th_iteration_d4-hq
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4580000000000001e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
nichonifroa/bert-finetuned-squad | nichonifroa | 2023-09-21T13:47:34Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:nichonifroa/bert-finetuned-squad",
"base_model:finetune:nichonifroa/bert-finetuned-squad",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T10:04:51Z | ---
base_model: nichonifroa/bert-finetuned-squad
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [nichonifroa/bert-finetuned-squad](https://huggingface.co/nichonifroa/bert-finetuned-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Ibrahim-Alam/finetuning-bert-base-uncased-on-Cornell_sentiment | Ibrahim-Alam | 2023-09-21T13:44:05Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-21T13:41:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-bert-base-uncased-on-Cornell_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-bert-base-uncased-on-Cornell_sentiment
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3582
- Accuracy: 0.8626
- F1: 0.8542
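No usage example is given, so a minimal inference sketch with the `transformers` pipeline follows. The input sentence is a placeholder, and the returned label names depend on this model's config (often generic `LABEL_0`/`LABEL_1` for this kind of fine-tune).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Ibrahim-Alam/finetuning-bert-base-uncased-on-Cornell_sentiment",
)

# Placeholder input -- label names come from the model config.
print(classifier("A thoughtful, beautifully shot film with a strong cast."))
```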
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
BigR/Lunar_lander | BigR | 2023-09-21T13:41:04Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-21T13:40:44Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.02 +/- 15.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption -- check the repository's file list for the exact .zip name.
checkpoint = load_from_hub(repo_id="BigR/Lunar_lander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
fsuarez/autotrain-logo-identifier-90194144191 | fsuarez | 2023-09-21T13:38:51Z | 184 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:fsuarez/autotrain-data-logo-identifier",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-19T14:59:47Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- fsuarez/autotrain-data-logo-identifier
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.060824697813101125
---
# 📒 logo-identifier-model
This model has been trained on a dataset called "LogoIdentifier" for multi-class classification of logos from 57 renowned brands and companies. These brands encompass a wide spectrum of industries and recognition, ranging from global giants like Coca-Cola, Coleman, Google, IBM, Nike, Pepsi, and many others. Each brand is thoughtfully organized into its designated subfolder, housing a comprehensive set of logo images for precise and accurate classification. Whether you're identifying iconic logos or exploring the branding diversity of these 57 famous names, this model is your go-to solution for logo recognition and classification.
# 🧪 Dataset Content
- The dataset includes logos from various brands and companies.
- The dataset is organized into subfolders, each corresponding to a specific brand or company.
- It contains a wide range of brand logos, including Acer, Acura, Adidas, Samsung, Lenovo, McDonald's, Java, and many more.
- Each brand or company in the dataset is associated with a numerical value, likely representing the number of images available for that brand.
The model has been trained to recognize and classify logos into their respective brand categories based on the images provided in the dataset.
| Company | Quantity of images |
| ----------------- | ------------------ |
| Acer | 67 |
| Acura | 74 |
| Addidas | 90 |
| Ades | 36 |
| Adio | 63 |
| Cadillac | 69 |
| CalvinKlein | 65 |
| Canon | 59 |
| Cocacola | 40 |
| CocaColaZero | 91 |
| Coleman | 57 |
| Converse | 60 |
| CornFlakes | 62 |
| DominossPizza | 99 |
| Excel | 88 |
| Gillette | 86 |
| GMC | 75 |
| Google | 93 |
| HardRockCafe | 93 |
| HBO | 103 |
| Heineken | 84 |
| HewlettPackard | 81 |
| Hp | 87 |
| Huawei | 84 |
| Hyundai | 84 |
| IBM | 84 |
| Java | 62 |
| KFC | 84 |
| Kia | 76 |
| Kingston | 79 |
| Lenovo | 82 |
| LG | 95 |
| Lipton | 94 |
| Mattel | 77 |
| McDonalds | 98 |
| MercedesBenz | 94 |
| Motorola | 86 |
| Nestle | 94 |
| Nickelodeon | 74 |
| Nike | 50 |
| Pennzoil | 82 |
| Pepsi | 93 |
| Peugeot | 60 |
| Porsche | 71 |
| Samsung | 96 |
| SchneiderElectric | 42 |
| Shell | 58 |
To use this model for brand logo identification, load it with the Hugging Face Transformers library via its repository ID (`fsuarez/autotrain-logo-identifier-90194144191`). You can then pass in an image of a brand logo, and the model should predict the brand it belongs to based on its training.
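A minimal sketch of that workflow is shown below; the image path is a placeholder and should be replaced with your own logo image.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="fsuarez/autotrain-logo-identifier-90194144191",
)

# The path below is a placeholder -- a local file, URL, or PIL image all work.
predictions = classifier("path/to/logo.png")
print(predictions)  # top brand labels with confidence scores
```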
# 🤗 Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 90194144191
- CO2 Emissions (in grams): 0.0608
## 📐 Validation Metrics
- Loss: 0.300
- Accuracy: 0.924
- Macro F1: 0.924
- Micro F1: 0.924
- Weighted F1: 0.922
- Macro Precision: 0.930
- Micro Precision: 0.924
- Weighted Precision: 0.928
- Macro Recall: 0.924
- Micro Recall: 0.924
- Weighted Recall: 0.924 |
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3_d2_d1_d0-hq | ShivamMangale | 2023-09-21T13:37:37Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T13:15:08Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3_d2_d1_d0-hq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3_d2_d1_d0-hq
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
LarryAIDraw/Char_Honkai_Raiden_Mei_adult | LarryAIDraw | 2023-09-21T13:36:19Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-21T13:34:19Z | ---
license: creativeml-openrail-m
---
|
sanctia/lora-sd-finesse | sanctia | 2023-09-21T13:33:39Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-20T02:37:21Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - sanctia/lora-sd-finesse
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the sanctia/finesse-image-generation dataset. You can find some example images below.
- Model and architecture details: https://www.notion.so/Design-document-Finesse-Generative-Challenge-4ed87ea624f84ff5a9ac09dc21885366
- Wandb report: https://wandb.ai/hpml3/text2image-fine-tune/runs/cdyy9un3?workspace=user-sanctia




|
srushtibhavsar/squad_bloom_3b | srushtibhavsar | 2023-09-21T13:29:25Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-21T13:29:23Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
chanifrusydi/t5-dialogue-summarization | chanifrusydi | 2023-09-21T13:27:14Z | 134 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"dataset:samsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-06-08T05:08:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- accuracy
pipeline_tag: summarization
base_model: t5-small
model-index:
- name: t5-dialogue-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-dialogue-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1 |
CyberHarem/nishikawa_honami_idolmastercinderellagirls | CyberHarem | 2023-09-21T13:13:40Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/nishikawa_honami_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-21T13:02:46Z | ---
license: mit
datasets:
- CyberHarem/nishikawa_honami_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of nishikawa_honami_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4080, you need to download `4080/nishikawa_honami_idolmastercinderellagirls.pt` as the embedding and `4080/nishikawa_honami_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4080**, with the score of 0.919. The trigger words are:
1. `nishikawa_honami_idolmastercinderellagirls`
2. `long_hair, brown_hair, green_eyes, earrings, jewelry, smile, breasts`
We do not recommend this model for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviation from the original character design, even in the smallest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with a fully automated LoRA training process for character models, or who believe character models must be trained purely by hand out of respect for the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.903 | [Download](5100/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.895 | [Download](4760/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.912 | [Download](4420/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| **4080** | **0.919** | [**Download**](4080/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.831 | [Download](3740/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.872 | [Download](3400/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.882 | [Download](3060/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.909 | [Download](2720/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.857 | [Download](2380/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.893 | [Download](2040/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.874 | [Download](1700/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.860 | [Download](1360/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.877 | [Download](1020/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.753 | [Download](680/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.579 | [Download](340/nishikawa_honami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3_d2 | ShivamMangale | 2023-09-21T13:12:54Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T09:29:28Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3_d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3_d2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.8e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3 | ShivamMangale | 2023-09-21T13:01:35Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T09:20:45Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_3rd_iteration_d3
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.62e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
aminh/squad-falcon-7b | aminh | 2023-09-21T12:57:14Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-21T12:57:06Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
MattStammers/appo-mujoco-doublependulum | MattStammers | 2023-09-21T12:56:15Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-21T12:56:12Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: mujoco_doublependulum
type: mujoco_doublependulum
metrics:
- type: mean_reward
value: 6568.13 +/- 4264.02
name: mean_reward
verified: false
---
An **APPO** model trained on the **mujoco_doublependulum** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MattStammers/appo-mujoco-doublependulum
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_doublependulum --train_dir=./train_dir --experiment=appo-mujoco-doublependulum
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.mujoco.train_mujoco --algo=APPO --env=mujoco_doublependulum --train_dir=./train_dir --experiment=appo-mujoco-doublependulum --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
antphb/pretrain-gpt2-large | antphb | 2023-09-21T12:53:55Z | 132 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:NlpHUST/gpt2-vietnamese",
"base_model:finetune:NlpHUST/gpt2-vietnamese",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-21T08:31:12Z | ---
base_model: NlpHUST/gpt2-vietnamese
tags:
- generated_from_trainer
model-index:
- name: pretrain-gpt2-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pretrain-gpt2-large
This model is a fine-tuned version of [NlpHUST/gpt2-vietnamese](https://huggingface.co/NlpHUST/gpt2-vietnamese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4155
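No usage section is provided; a minimal text-generation sketch with the `transformers` pipeline is shown below (the Vietnamese prompt is only a placeholder).

```python
from transformers import pipeline

generator = pipeline("text-generation", model="antphb/pretrain-gpt2-large")

# Placeholder Vietnamese prompt -- replace with your own text.
print(generator("Hôm nay trời đẹp,", max_new_tokens=40)[0]["generated_text"])
```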
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 70
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0875 | 13.05 | 500 | 2.6828 |
| 2.5739 | 26.1 | 1000 | 2.5363 |
| 2.4573 | 39.15 | 1500 | 2.4643 |
| 2.3962 | 52.2 | 2000 | 2.4294 |
| 2.3662 | 65.25 | 2500 | 2.4155 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
duwuonline/my-ielts | duwuonline | 2023-09-21T12:49:48Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-21T12:20:17Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: my-ielts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-ielts
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
BlahBlah1/LLama2 | BlahBlah1 | 2023-09-21T12:45:20Z | 2 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2023-09-21T12:10:06Z | ---
license: apache-2.0
---
OpenBuddy's GGML model converted to GGUF, in line with the August 21 llama.cpp update that introduced the GGUF file format.
This is a quantised version of the model. |
MThonar/Linkk | MThonar | 2023-09-21T12:33:31Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-21T12:27:35Z | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth
---
# DreamBooth model of Link trained by MThonar on the MThonar/link dataset.
This is a Stable Diffusion model fine-tuned with Dreambooth on images of Linkk. It can be used by modifying the `instance_prompt`: **a photo of Linkk**
## Description
This is a Stable Diffusion model fine-tuned on images of Linkk.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('MThonar/Linkk')
image = pipeline("a photo of Linkk").images[0]  # pass the instance prompt the model was trained on
image
```
|
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_2nd_iteration_d2_d1_d0 | ShivamMangale | 2023-09-21T12:32:06Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T08:46:47Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_2nd_iteration_d2_d1_d0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_2nd_iteration_d2_d1_d0
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jphme/phi-1_5_Wizard_Vicuna_uncensored | jphme | 2023-09-21T12:23:23Z | 69 | 27 | transformers | [
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"phi",
"phi-1_5",
"english",
"custom_code",
"en",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-09-12T17:30:57Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
inference: true
tags:
- pytorch
- phi
- phi-1_5
- english
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
---
# Phi 1.5 Wizard Vicuna Experimental
Experimental Finetune on Microsoft's [Phi 1.5](https://huggingface.co/microsoft/phi-1_5).
This is highly experimental, only trained on a subset of the 70k Wizard Vicuna dataset and not meant for production use.
This model also runs reasonably fast on CPU!
Will update with later checkpoints.
# Prompt Format
ShareGPT / Vicuna (without newlines):
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: This is a question? ASSISTANT: Here is my answer
```
# Code Example
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("jphme/phi-1_5_Wizard_Vicuna_uncensored", trust_remote_code=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("jphme/phi-1_5_Wizard_Vicuna_uncensored", trust_remote_code=True)
prompt_template=("A chat between a curious user and an artificial intelligence assistant. "
"The assistant gives helpful, detailed, and polite answers to the user's questions. "
"USER: {prompt} ASSISTANT:")
inputs = tokenizer(
prompt_template.format(prompt="What is 1+1?"),
return_tensors="pt", return_attention_mask=False).to('cuda')
outputs = model.generate(
**inputs, max_length=200,
do_sample=True,
temperature=0.5,
top_p=0.95,
use_cache=True,
repetition_penalty=1.1,
#can try to uncomment this if model doesn't end
#eos_token_id=tokenizer.eos_token_id
)
text = tokenizer.batch_decode(outputs,skip_special_tokens=True)[0]
print(text)
```
The model doesn't stop by itself (this could be an issue due to eos/bos tokens being identical), so you can try to explicitly set the `eos_token_id` (see above) or use a custom stopping criteria.
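As a minimal sketch building on the code example above, a custom stopping criteria could end generation once the model starts a new turn; stopping on the string "USER:" is an assumption:
```python
from transformers import StoppingCriteria, StoppingCriteriaList
class StopOnSubstring(StoppingCriteria):
    """Stop generation once a given substring shows up in the newly generated text."""
    def __init__(self, tokenizer, stop_string, prompt_length):
        self.tokenizer = tokenizer
        self.stop_string = stop_string
        self.prompt_length = prompt_length  # number of prompt tokens to skip when decoding
    def __call__(self, input_ids, scores, **kwargs):
        generated = self.tokenizer.decode(input_ids[0][self.prompt_length:])
        return self.stop_string in generated
stopping = StoppingCriteriaList(
    [StopOnSubstring(tokenizer, "USER:", inputs["input_ids"].shape[1])]
)
outputs = model.generate(**inputs, max_length=200, stopping_criteria=stopping)
```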
You can remove `.to("cuda")` to let the model run without a GPU!
# Examples
> A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Who was president of the USA in 1995? ASSISTANT: The president of the United States at that time was Bill Clinton (...)
> A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Write a python function that returns the fibonacci sequence ASSISTANT:The Fibonacci sequence is a series of numbers where each number after the first two are obtained by adding the two preceding ones together (starting from 0). So the sequence goes 0, 1, 1, 2, 3, 5, 8, 13, 21...
Here's a Python function that generates the nth term in the Fibonacci sequence:
```python
def fibonacci(n):
if n <= 1:
return n
else:
f = [0,1]
for i in range(2, n+1):
f.append(f[-1]+f[-2])
return f[-1]
```
> This function takes as input `n`, which represents the (...)
(this function works with corrected indentation)
> A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: What is (500+12)/4? ASSISTANT:The answer to this expression is 7.5. (...)
Some potential here ;-)...
# Thanks
Thanks to Microsoft for releasing Phi 1.5, [Eric Hartford](https://huggingface.co/ehartford) for releasing the Wizard Vicuna dataset, [Winglian](https://huggingface.co/winglian) for his great work on Axolotl that I use for finetuning, and [Teknium](https://huggingface.co/teknium) for some Phi finetuning discussion.
# License
The original licenses of the dataset and model applies. No warranty whatsoever, this model is only intended for research purposes. |
bavolesy/Reinforce-Cartpole-v1 | bavolesy | 2023-09-21T12:19:12Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-21T12:18:58Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Volkan/photography-upscalers | Volkan | 2023-09-21T12:15:58Z | 0 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
]
| null | 2023-09-05T19:05:51Z | ---
license: cc-by-nc-nd-4.0
---
|
MattStammers/appo-mujoco-pendulum | MattStammers | 2023-09-21T12:11:43Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-21T12:11:40Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: mujoco_pendulum
type: mujoco_pendulum
metrics:
- type: mean_reward
value: 1000.00 +/- 0.00
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **mujoco_pendulum** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MattStammers/appo-mujoco-pendulum
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_pendulum --train_dir=./train_dir --experiment=appo-mujoco-pendulum
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.mujoco.train_mujoco --algo=APPO --env=mujoco_pendulum --train_dir=./train_dir --experiment=appo-mujoco-pendulum --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Rexe/Deci-Decicoder-1b-qlora-coder | Rexe | 2023-09-21T12:07:27Z | 3 | 0 | peft | [
"peft",
"base_model:Deci/DeciCoder-1b",
"base_model:adapter:Deci/DeciCoder-1b",
"region:us"
]
| null | 2023-09-19T01:30:55Z | ---
library_name: peft
base_model: Deci/DeciCoder-1b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
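As a minimal sketch, this quantization config could be expressed as a `BitsAndBytesConfig`; loading the base model this way is an assumption:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# Mirrors the bitsandbytes values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "Deci/DeciCoder-1b",
    quantization_config=bnb_config,
    trust_remote_code=True,  # DeciCoder ships custom modeling code
)
```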
### Framework versions
- PEFT 0.6.0.dev0
|
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_2nd_iteration_d2_d1 | ShivamMangale | 2023-09-21T12:00:23Z | 133 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T08:29:28Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_2nd_iteration_d2_d1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_2nd_iteration_d2_d1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
MattStammers/appo-mujoco-walker | MattStammers | 2023-09-21T11:43:35Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T17:00:51Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: ATQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: mujoco_walker
type: mujoco_walker
metrics:
- type: mean_reward
value: 3553.55 +/- 944.12
name: mean_reward
verified: false
---
A(n) **ATQC** model trained on the **mujoco_walker** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MattStammers/appo-mujoco-walker
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.mujoco.enjoy_mujoco --algo=ATQC --env=mujoco_walker --train_dir=./train_dir --experiment=appo-mujoco-walker
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.mujoco.train_mujoco --algo=ATQC --env=mujoco_walker --train_dir=./train_dir --experiment=appo-mujoco-walker --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
ShivamMangale/XLM-Roberta-base-all_hi_weakdap_1st_iteration_d1_d0-hq | ShivamMangale | 2023-09-21T11:36:49Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-21T11:20:16Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-all_hi_weakdap_1st_iteration_d1_d0-hq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-all_hi_weakdap_1st_iteration_d1_d0-hq
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Ori/lama-2-13b-peft-strategyqa-no-retrieval-1-v2-seed-3 | Ori | 2023-09-21T11:36:27Z | 3 | 0 | peft | [
"peft",
"safetensors",
"region:us"
]
| null | 2023-09-21T11:34:07Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
dss107/mp_base | dss107 | 2023-09-21T11:33:48Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-09-21T11:32:21Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dss107/mp_base
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
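As a minimal training sketch of these two steps with the SetFit 0.x `SetFitTrainer` API (the base Sentence Transformer and the toy data are illustrative assumptions):
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer
# Toy few-shot data, purely illustrative.
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive loss used for step 1
    num_iterations=20,                # text pairs generated per sample
    num_epochs=1,
)
trainer.train()  # runs the contrastive fine-tuning, then fits the classification head
```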
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dss107/mp_base")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
haseong8012/wav2vec2-large-xlsr-53_ko2 | haseong8012 | 2023-09-21T11:30:32Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:zeroth_korean",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-20T11:27:30Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- zeroth_korean
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-53-fine-tune_korean_byAILAB2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: zeroth_korean
type: zeroth_korean
config: clean
split: test
args: clean
metrics:
- name: Wer
type: wer
value: 0.9067911459117602
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-fine-tune_korean_byAILAB2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the zeroth_korean dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4929
- Wer: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 38 | 54.4059 | 1.0 |
| No log | 2.0 | 77 | 38.8388 | 1.0 |
| No log | 2.99 | 115 | 24.1740 | 1.0 |
| No log | 4.0 | 154 | 16.4733 | 1.0 |
| No log | 4.99 | 192 | 10.1900 | 1.0 |
| No log | 6.0 | 231 | 6.0076 | 1.0 |
| No log | 6.99 | 269 | 4.8990 | 1.0 |
| No log | 8.0 | 308 | 4.8442 | 1.0 |
| No log | 8.99 | 346 | 4.8284 | 1.0 |
| No log | 10.0 | 385 | 4.8316 | 1.0 |
| 16.886 | 10.99 | 423 | 4.8164 | 1.0 |
| 16.886 | 12.0 | 462 | 4.7815 | 1.0 |
| 16.886 | 12.99 | 500 | 4.7204 | 0.9989 |
| 16.886 | 14.0 | 539 | 4.6842 | 0.9989 |
| 16.886 | 14.99 | 577 | 4.6641 | 0.9994 |
| 16.886 | 16.0 | 616 | 4.6527 | 1.0 |
| 16.886 | 16.99 | 654 | 4.6745 | 0.9992 |
| 16.886 | 18.0 | 693 | 4.6591 | 1.0 |
| 16.886 | 18.99 | 731 | 4.6506 | 0.9997 |
| 16.886 | 20.0 | 770 | 4.6719 | 0.9967 |
| 4.4391 | 20.99 | 808 | 4.6067 | 0.9968 |
| 4.4391 | 22.0 | 847 | 4.5748 | 0.9968 |
| 4.4391 | 22.99 | 885 | 4.5166 | 0.9962 |
| 4.4391 | 24.0 | 924 | 4.3783 | 0.9926 |
| 4.4391 | 24.99 | 962 | 4.2711 | 0.9913 |
| 4.4391 | 26.0 | 1001 | 3.6515 | 1.0030 |
| 4.4391 | 26.99 | 1039 | 3.1057 | 1.0640 |
| 4.4391 | 28.0 | 1078 | 2.6593 | 1.0742 |
| 4.4391 | 28.99 | 1116 | 2.4071 | 1.0587 |
| 4.4391 | 30.0 | 1155 | 2.2041 | 1.0379 |
| 4.4391 | 30.99 | 1193 | 2.0495 | 1.0319 |
| 3.1722 | 32.0 | 1232 | 1.9754 | 1.0459 |
| 3.1722 | 32.99 | 1270 | 1.8658 | 0.9968 |
| 3.1722 | 34.0 | 1309 | 1.7887 | 0.9883 |
| 3.1722 | 34.99 | 1347 | 1.7560 | 0.9776 |
| 3.1722 | 36.0 | 1386 | 1.6987 | 0.9675 |
| 3.1722 | 36.99 | 1424 | 1.6513 | 0.9443 |
| 3.1722 | 38.0 | 1463 | 1.6187 | 0.9473 |
| 3.1722 | 38.99 | 1501 | 1.6210 | 0.9408 |
| 3.1722 | 40.0 | 1540 | 1.5957 | 0.9458 |
| 3.1722 | 40.99 | 1578 | 1.5673 | 0.9246 |
| 1.2364 | 42.0 | 1617 | 1.5748 | 0.9286 |
| 1.2364 | 42.99 | 1655 | 1.5333 | 0.9217 |
| 1.2364 | 44.0 | 1694 | 1.5138 | 0.9100 |
| 1.2364 | 44.99 | 1732 | 1.5244 | 0.9223 |
| 1.2364 | 46.0 | 1771 | 1.5041 | 0.9080 |
| 1.2364 | 46.99 | 1809 | 1.5151 | 0.9155 |
| 1.2364 | 48.0 | 1848 | 1.4955 | 0.9077 |
| 1.2364 | 48.99 | 1886 | 1.4924 | 0.9065 |
| 1.2364 | 49.35 | 1900 | 1.4929 | 0.9068 |
### Framework versions
- Transformers 4.33.2
- Pytorch 1.12.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ashishpatel26/phi-1_5-finetuned-dialogstudio | ashishpatel26 | 2023-09-21T11:27:33Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:dialogstudio",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
]
| null | 2023-09-21T10:43:09Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
datasets:
- dialogstudio
model-index:
- name: phi-1_5-finetuned-dialogstudio
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-dialogstudio
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the dialogstudio dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2430
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 3
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
lora-library/zac | lora-library | 2023-09-21T11:27:25Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stablediffusionapi/majicmixrealistic",
"base_model:adapter:stablediffusionapi/majicmixrealistic",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-21T11:27:25Z | ---
license: creativeml-openrail-m
base_model: stablediffusionapi/majicmixrealistic
instance_prompt: z4c
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - zac
These are LoRA adaptation weights for [stablediffusionapi/majicmixrealistic](https://huggingface.co/stablediffusionapi/majicmixrealistic). The weights were trained on the instance prompt "z4c" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
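A minimal usage sketch, assuming a diffusers version that provides `load_lora_weights`:
```python
import torch
from diffusers import StableDiffusionPipeline
# Load the base model, then apply the LoRA weights on top of it.
pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/majicmixrealistic", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("lora-library/zac")
image = pipe("a photo of z4c").images[0]  # use the instance prompt
image.save("zac.png")
```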
|
ldos/text_shortening_model_v47 | ldos | 2023-09-21T11:25:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-xsum",
"base_model:finetune:facebook/bart-large-xsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-21T10:04:22Z | ---
license: mit
base_model: facebook/bart-large-xsum
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v47
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3912
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Bert precision: 0.6047
- Bert recall: 0.5681
- Average word count: 1.0
- Max word count: 1
- Min word count: 1
- Average token count: 12.0
- % shortened texts with length > 12: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 7.822 | 1.0 | 83 | 7.4737 | 0.0776 | 0.0 | 0.0775 | 0.0776 | 0.6348 | 0.6223 | 2.0 | 2 | 2 | 13.0 | 0.0 |
| 3.2859 | 2.0 | 166 | 6.6585 | 0.1063 | 0.0 | 0.1063 | 0.1063 | 0.6469 | 0.608 | 5.0026 | 6 | 5 | 12.0 | 0.0 |
| 3.0284 | 3.0 | 249 | 6.4761 | 0.116 | 0.0 | 0.116 | 0.1161 | 0.6479 | 0.6388 | 3.9974 | 4 | 3 | 14.0 | 0.0 |
| 2.9681 | 4.0 | 332 | 6.4592 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6071 | 0.5723 | 1.0 | 1 | 1 | 12.0 | 0.0 |
| 2.9377 | 5.0 | 415 | 6.4142 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6047 | 0.5681 | 1.0 | 1 | 1 | 12.0 | 0.0 |
| 2.9168 | 6.0 | 498 | 6.4049 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6049 | 0.5685 | 1.0 | 1 | 1 | 12.0 | 0.0 |
| 2.8964 | 7.0 | 581 | 6.3912 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6047 | 0.5681 | 1.0 | 1 | 1 | 12.0 | 0.0 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
monsterapi/opt1.3B_codeinstruct | monsterapi | 2023-09-21T11:23:26Z | 0 | 0 | peft | [
"peft",
"facebook-opt-1.3b",
"code",
"instruct",
"instruct-code",
"code-alpaca",
"alpaca-instruct",
"alpaca",
"opt-1.3b",
"dataset:sahil2801/CodeAlpaca-20k",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"region:us"
]
| null | 2023-05-06T03:17:16Z | ---
library_name: peft
tags:
- facebook-opt-1.3b
- code
- instruct
- instruct-code
- code-alpaca
- alpaca-instruct
- alpaca
- opt-1.3b
datasets:
- sahil2801/CodeAlpaca-20k
base_model: codellama/CodeLlama-7b-hf
---
We finetuned Facebook/OPT-1.3B on the Code-Alpaca-Instruct dataset (sahil2801/CodeAlpaca-20k) for 5 epochs using the [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
This dataset is HuggingFaceH4/CodeAlpaca_20K unfiltered, removing 36 instances of blatant alignment.
The finetuning session completed in 1 hour and 30 minutes and cost us only `$6` for the entire finetuning run!
#### Hyperparameters & Run details:
- Model Path: facebook/opt-1.3b
- Dataset: sahil2801/CodeAlpaca-20k
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
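A minimal inference sketch for loading this adapter with PEFT on top of the base model named above (facebook/opt-1.3b); the prompt and the expected prompt format are illustrative assumptions:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the base model, attach the LoRA adapter, then generate.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
model = PeftModel.from_pretrained(base, "monsterapi/opt1.3B_codeinstruct")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```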
---
license: apache-2.0
---
|
monsterapi/opt125M_alpaca | monsterapi | 2023-09-21T11:23:21Z | 146 | 0 | peft | [
"peft",
"facebook/opt-125m",
"code",
"instruct",
"alpaca-instruct",
"alpaca",
"dataset:tatsu-lab/alpaca",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"region:us"
]
| null | 2023-05-13T05:38:51Z | ---
library_name: peft
tags:
- facebook/opt-125m
- code
- instruct
- alpaca-instruct
- alpaca
datasets:
- tatsu-lab/alpaca
base_model: facebook/opt-125m
---
We finetuned facebook/opt-125m on the tatsu-lab/alpaca dataset for 10 epochs using the [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
This dataset is HuggingFaceH4/tatsu-lab/alpaca unfiltered, removing 36 instances of blatant alignment.
The finetuning session completed in 40 minutes and cost us only `$4` for the entire finetuning run!
#### Hyperparameters & Run details:
- Model: facebook/opt-125m
- Dataset: tatsu-lab/alpaca
- Learning rate: 0.0003
- Number of epochs: 10
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
---
license: apache-2.0
---
|
monsterapi/OpenPlatypus_LLAMA2_7b | monsterapi | 2023-09-21T11:23:18Z | 6 | 1 | peft | [
"peft",
"meta-llama/Llama-2-7b-hf",
"code",
"instruct",
"instruct-code",
"logical-reasoning",
"Platypus2",
"dataset:garage-bAInd/Open-Platypus",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-09-05T10:13:05Z | ---
library_name: peft
tags:
- meta-llama/Llama-2-7b-hf
- code
- instruct
- instruct-code
- logical-reasoning
- Platypus2
datasets:
- garage-bAInd/Open-Platypus
base_model: meta-llama/Llama-2-7b-hf
---
We finetuned Meta-Llama/Llama-2-7b-hf on the Open-Platypus dataset (garage-bAInd/Open-Platypus) for 5 epochs using [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
#### About OpenPlatypus Dataset
OpenPlatypus is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. The dataset is comprised of various sub-datasets, including PRM800K, ScienceQA, SciBench, ReClor, TheoremQA, among others. These were filtered using keyword search and Sentence Transformers to remove questions with a similarity above 80%. The dataset includes contributions under various licenses like MIT, Creative Commons, and Apache 2.0.
The finetuning session completed in 1 hour and 30 minutes and cost us only `$15` for the entire finetuning run!
#### Hyperparameters & Run details:
- Model Path: meta-llama/Llama-2-7b-hf
- Dataset: garage-bAInd/Open-Platypus
- Learning rate: 0.0002
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
---
license: apache-2.0
---
|
alexalbala/llam2test | alexalbala | 2023-09-21T11:23:16Z | 0 | 0 | peft | [
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-09-21T08:49:01Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
monsterapi/OpenPlatypus_Falcon_7b | monsterapi | 2023-09-21T11:23:15Z | 2 | 0 | peft | [
"peft",
"tiiuae/falcon-7b",
"code",
"instruct",
"instruct-code",
"logical-reasoning",
"Platypus2",
"dataset:garage-bAInd/Open-Platypus",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"region:us"
]
| null | 2023-09-05T11:28:00Z | ---
library_name: peft
tags:
- tiiuae/falcon-7b
- code
- instruct
- instruct-code
- logical-reasoning
- Platypus2
datasets:
- garage-bAInd/Open-Platypus
base_model: codellama/CodeLlama-7b-hf
---
We finetuned TIIUAE/Falcon-7B on the Open-Platypus dataset (garage-bAInd/Open-Platypus) for 3 epochs using [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
#### About OpenPlatypus Dataset
OpenPlatypus is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. The dataset is comprised of various sub-datasets, including PRM800K, ScienceQA, SciBench, ReClor, TheoremQA, among others. These were filtered using keyword search and Sentence Transformers to remove questions with a similarity above 80%. The dataset includes contributions under various licenses like MIT, Creative Commons, and Apache 2.0.
The finetuning session completed in about 3 hours and cost us only `$14` for the entire finetuning run!
#### Hyperparameters & Run details:
- Model Path: tiiuae/falcon-7b
- Dataset: garage-bAInd/Open-Platypus
- Learning rate: 0.0003
- Number of epochs: 3
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
---
license: apache-2.0
---
|