modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-07-15 12:29:39
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 521
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-07-15 12:28:52
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
Sumayan/Sumayan | Sumayan | 2023-07-14T04:11:08Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-14T04:11:08Z | ---
license: creativeml-openrail-m
---
|
keehun/textual_inversion_all | keehun | 2023-07-14T04:05:02Z | 6 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-14T02:47:03Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - keehun/textual_inversion_all
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
|
PraveenJesu/openai-whisper-medium-murf-audio-augment | PraveenJesu | 2023-07-14T04:04:59Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-14T04:04:56Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
BrendaScar/Taxi-v3-Qtable | BrendaScar | 2023-07-14T04:03:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-14T04:03:50Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-Qtable
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="BrendaScar/Taxi-v3-Qtable", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
enddl22/dreambooth_tutorial | enddl22 | 2023-07-14T04:02:00Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-14T02:42:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - enddl22/dreambooth_tutorial
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
fnlp/moss-rlhf-sft-model-7B-en | fnlp | 2023-07-14T03:28:15Z | 0 | 2 | null | [
"llm",
"moss",
"rlhf",
"sft",
"zh",
"arxiv:2307.04964",
"license:agpl-3.0",
"region:us"
]
| null | 2023-07-13T08:03:11Z | ---
license: agpl-3.0
language:
- zh
tags:
- llm
- moss
- rlhf
- sft
---
# MOSS-RLHF
### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]*
## 🌟 News
### 👉 Wed, 12. July 2023. We have released Chinese reward model based OpenChineseLlama-7B!
[moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
<br>
### 👉 Thu, 13. July 2023. We have released English reward model and SFT model based Llama-7B!
[moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)
[moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
<br>
## 🧾 Open-source List
- [x] Open source code for RL training in large language models.
- [x] A 7B Chinese reward model based on openChineseLlama.
- [x] A 7B English reward model based on Llama-7B.
- [x] SFT model for English.
- [ ] Policy model for English after RLHF.
- ...
## 🌠 Introduction
Due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle.
In this technical report, we intend to help researchers to train their models stably with human feedback.
Contributions are summarized as follows:
1) We release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data;
2) We conduct in-depth analysis on the inner workings of PPO algorithm and propose the PPO-max algorithm to ensure stable model training;
3) We release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans.
## 🔩 Requirements & Setup
This repository works on Python 3.8 and PyTorch 1.13.1.
We recommend using the **conda** virtual environment to run the code.
#### Step 1: Create a new Python virtual environment
```bash
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
```
#### Step 2: Install PyTorch and TensorBoard
```bash
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
```
#### Step 3: Install the remaining dependencies
```bash
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
```
## ✨ Start training your own model!
Run code in a few steps.
### Step 1: Recover Reward model weights
We can not directly release the full weight of the reward model because of protocol restrictions.
You can merge the diff weight with original Llama-7B to recover the reward model we used.
We upload the diff models, thanks to tatsu-lab, you can recover the reward model follow these steps:
```bash
1) Download the weight diff into your local machine. The weight diff is located at:
# For English:
TODO
# For Chinese:
https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main
2) Merge the weight diff with the original Llama-7B:
# For English:
# Reward model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
# SFT model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
# Policy model
TODO
# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
```
### Step 2: Select your own SFT model.
Because of some limitations, we can not release the **Chinese** SFT model (Currently).
You can use your own SFT model, or a strong base model instead of our SFT model.
### Step 3: Start training
Run the command below.
```
# For Chinese:
# You need to use your own sft model currently.
bash run_zh.sh
# For English:
# We have loaded the sft model and reward model to huggingface.
bash run_en.sh
```
## Citation
```bibtex
@article{zheng2023secrets,
title={Secrets of RLHF in Large Language Models Part I: PPO},
author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
year={2023},
eprint={2307.04964},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
11old/11 | 11old | 2023-07-14T03:16:29Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2023-07-14T03:16:29Z | ---
license: bigscience-bloom-rail-1.0
---
|
justairr/BetterPrompt | justairr | 2023-07-14T03:11:36Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-07-13T12:37:51Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A Prompt Generator and Enhancer based on ChatGLM
|
Diandiandian/path-to-save-model | Diandiandian | 2023-07-14T03:10:13Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-11T10:16:14Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Diandiandian/path-to-save-model
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
junejae/distilbert-base-uncased-finetuned-emotion | junejae | 2023-07-14T02:57:50Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-14T02:02:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.918
- name: F1
type: f1
value: 0.9183451843024099
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2300
- Accuracy: 0.918
- F1: 0.9183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8386 | 1.0 | 250 | 0.3276 | 0.904 | 0.9011 |
| 0.2572 | 2.0 | 500 | 0.2300 | 0.918 | 0.9183 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YanJiangJerry/covid-augment-tweet-bert-large-e2-version2-noweight | YanJiangJerry | 2023-07-14T02:54:32Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-14T02:43:58Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: covid-augment-tweet-bert-large-e2-version2-noweight
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-augment-tweet-bert-large-e2-version2-noweight
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2199
- Accuracy: 0.9607
- F1: 0.8954
- Precision: 0.8878
- Recall: 0.9031
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0809 | 1.0 | 1023 | 0.2318 | 0.9536 | 0.8831 | 0.8318 | 0.9412 |
| 0.0415 | 2.0 | 2046 | 0.2199 | 0.9607 | 0.8954 | 0.8878 | 0.9031 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YanJiangJerry/covid-augment-tweet-bert-large-e4-version2 | YanJiangJerry | 2023-07-14T02:41:59Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-14T02:20:32Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: covid-augment-tweet-bert-large-e4-version2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-augment-tweet-bert-large-e4-version2
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3591
- Accuracy: 0.9568
- F1: 0.8859
- Precision: 0.8725
- Recall: 0.8997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0732 | 1.0 | 1023 | 0.2216 | 0.9613 | 0.8986 | 0.8779 | 0.9204 |
| 0.0216 | 2.0 | 2046 | 0.2441 | 0.9600 | 0.8977 | 0.8580 | 0.9412 |
| 0.0065 | 3.0 | 3069 | 0.3901 | 0.9497 | 0.8730 | 0.8246 | 0.9273 |
| 0.0032 | 4.0 | 4092 | 0.3591 | 0.9568 | 0.8859 | 0.8725 | 0.8997 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GPTQ | bhenrym14 | 2023-07-14T02:35:18Z | 9 | 5 | transformers | [
"transformers",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"arxiv:2306.15595",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-14T01:35:39Z | ---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
Mostly untested. Feedback welcome!
Find fp16 quantized weights here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-fp16
# RoPE Scaled QLoRA Fine-tune of Llama-33b on airoboros-gpt4-1.4.1, with Long Context Pretraining (GPTQ)
## Overview
This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (GPTQ 4bit quantized) with several key modifications:
- Context length extended to 16384 by RoPE Scaled Embeddings.
- The Llama-33b base model is pretrained for additional 100 steps on 8192 length sequences from the pile dataset.
- Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4
**This is a QLoRA fine-tune**
Pretraining took 10 hours. Fine-tuning took ~41 hours on 1x RTX 6000 Ada.
## How to Use
The easiest way is to use the GPTQ weights (linked above) with [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) and ExLlama. You'll need to set max_seq_len to 16384 and compress_pos_emb to 8.
**IMPORTANT: To use these weights with autoGPTQ or GPTQ-for-LLama you'll need to patch in the appropriate RoPE scaling module. see: [replace_llama_rope_with_scaled_rope](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_scaled_monkey_patch-16k.py)**
I have had issues with going beyond 8192 tokens with exllama. I have not tested that with this model. YMMV
## Motivation
Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k) and [meta AI)](https://arxiv.org/abs/2306.15595)) demonstrate the ability to extend the context window without (total) retraining. My prior experiments have found the following:
- An adapter finetuned with the scaled embeddings, applied to a base model other than the one upon which it was trained, brings a significant performance penalty at all context lengths. ([airoboros-13b-gpt4-1.4.1-PI-8192](https://huggingface.co/bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ)).
- Pretraining on sequences equal in length to the maximum given by the scaling factor improves performance considerably. This is most notable at the longest contexts lengths. In fact, for the 7b model it was necessary to achieve decreasing perplexity beyond 8k tokens for the (see [airoboros-7b-gpt4-1.4.1-lxctx-PI-16384](https://huggingface.co/bhenrym14/airoboros-7b-gpt4-1.4.1-lxctx-PI-16384-fp16)).
This model applies the pretraining methodology at 8192 sequence length, but uses a scaling factor of 8, giving a theoretical max context of 16384. Unlike for the 7b model, I did not pretrain at 16384 due to memory constraints. How will this model perform at contexts >8k? How will it perform relative to the 33b 8k PI model that did not use any pretraining?
## Relative Performance (perplexity)
| Context (tokens) | bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-fp16 (this model) | bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16 | TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ | jondurbin/airoboros-33B-gpt4-1.4-GPTQ |
| ---| ------- | ------ | --- | --- |
| 512 | 7.90 | 9.84 | 8.24 | **6.36** |
| 1024 | 6.17 | 7.73 | 8.06 | **5.12** |
| 2048 | 5.23 | 6.62 | 7.02 | **4.43** |
| 4096 | **4.91** | 6.25 | 6.56 | 54.5 |
If I manage to get longer context perplexities, I'll post them here.
- Despite the larger scaling factor, this model outperforms the original 8k PI model at all tested context lengths. This is almost certainly due to the long context pretraining.
- As demonstrated with previous experiments SuperHOT underperforms these finetunes at all tested context lengths.
- For contexts shorter than the original 2048, the original model has lower perplexity. This is consistent with the literature.
- This comparison isn't perfect. I did use the 1.4.1 dataset. There are other potentially influential variables responsible for these performance differences. Perplexity can only tell part of the story.
Whether perplexity continues to decrease between 8k and 16k, I am not certain. I don't have the VRAM to test this atm.
## Quantization:
The merged model was quantized with AutoGPTQ (bits = 4, group_size = 64, desc_act = True).
## Prompting:
See original model card below.
# Original model card: Jon Durbin's Airoboros 33B GPT4 1.4
__not yet tested!__
## Overview
This is a qlora fine-tune 33b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a newline and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js.
PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary>
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the manerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has it's own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai
So, to reiterate: this model (and datasets) cannot be used commercially. |
spensercai/DeOldify | spensercai | 2023-07-14T02:35:02Z | 0 | 11 | null | [
"license:mit",
"region:us"
]
| null | 2023-07-14T02:28:29Z | ---
license: mit
---
# DeOldify's Completed Generator Weights
github: https://github.com/jantic/DeOldify |
sumandeng/my-m3e-base | sumandeng | 2023-07-14T02:29:36Z | 15 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"embedding",
"text-embedding",
"zh",
"en",
"region:us"
]
| null | 2023-08-02T13:27:13Z | ---
language:
- zh
- en
tags:
- embedding
- text-embedding
library_name: sentence-transformers
---
# 🅜 M3E Models
[m3e-small](https://huggingface.co/moka-ai/m3e-small) | [m3e-base](https://huggingface.co/moka-ai/m3e-base)
M3E 是 Moka Massive Mixed Embedding 的缩写
- Moka,此模型由 MokaAI 训练,开源和评测,训练脚本使用 [uniem](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/train_m3e.py) ,评测 BenchMark 使用 [MTEB-zh](https://github.com/wangyuxinwhy/uniem/tree/main/mteb-zh)
- Massive,此模型通过**千万级** (2200w+) 的中文句对数据集进行训练
- Mixed,此模型支持中英双语的同质文本相似度计算,异质文本检索等功能,未来还会支持代码检索
- Embedding,此模型是文本嵌入模型,可以将自然语言转换成稠密的向量
## 🆕 更新说明
- 2023.06.24,添加微调 M3E 的教程 [notebook](https://github.com/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb),几行代码,更佳适配!<a target="_blank" href="https://colab.research.google.com/github/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
- 2023.06.14,添加了三个中文开源文本嵌入模型到评测中,包括 UER, ErLangShen, DMetaSoul
- 2023.06.08,添加检索任务的评测结果,在 T2Ranking 1W 中文数据集上,m3e-base 在 ndcg@10 上达到了 0.8004,超过了 openai-ada-002 的 0.7786
- 2023.06.07,添加文本分类任务的评测结果,在 6 种文本分类数据集上,m3e-base 在 accuracy 上达到了 0.6157,超过了 openai-ada-002 的 0.5956
## ⚖️ 模型对比
| | 参数数量 | 维度 | 中文 | 英文 | s2s | s2p | s2c | 开源 | 兼容性 | s2s Acc | s2p ndcg@10 |
| --------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | ---- | ---------- | ------------ | -------- |
| m3e-small | 24M | 512 | 是 | 否 | 是 | 否 | 否 | 是 | 优 | 0.5834 | 0.7262 |
| m3e-base | 110M | 768 | 是 | 是 | 是 | 是 | 否 | 是 | 优 | **0.6157** | **0.8004** |
| text2vec | 110M | 768 | 是 | 否 | 是 | 否 | 否 | 是 | 优 | 0.5755 | 0.6346 |
| openai-ada-002 | 未知 | 1536 | 是 | 是 | 是 | 是 | 是 | 否 | 优 | 0.5956 | 0.7786 |
说明:
- s2s, 即 sentence to sentence ,代表了同质文本之间的嵌入能力,适用任务:文本相似度,重复问题检测,文本分类等
- s2p, 即 sentence to passage ,代表了异质文本之间的嵌入能力,适用任务:文本检索,GPT 记忆模块等
- s2c, 即 sentence to code ,代表了自然语言和程序语言之间的嵌入能力,适用任务:代码检索
- 兼容性,代表了模型在开源社区中各种项目被支持的程度,由于 m3e 和 text2vec 都可以直接通过 sentence-transformers 直接使用,所以和 openai 在社区的支持度上相当
- ACC & ndcg@10,详情见下方的评测
Tips:
- 使用场景主要是中文,少量英文的情况,建议使用 m3e 系列的模型
- 多语言使用场景,并且不介意数据隐私的话,我建议使用 openai text-embedding-ada-002
- 代码检索场景,推荐使用 openai text-embedding-ada-002
- 文本检索场景,请使用具备文本检索能力的模型,只在 S2S 上训练的文本嵌入模型,没有办法完成文本检索任务
## 🔧 使用 M3E
您需要先安装 sentence-transformers
```bash
pip install -U sentence-transformers
```
安装完成后,您可以使用以下代码来使用 M3E Models
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('moka-ai/m3e-base')
#Our sentences we like to encode
sentences = [
'* Moka 此文本嵌入模型由 MokaAI 训练并开源,训练脚本使用 uniem',
'* Massive 此文本嵌入模型通过**千万级**的中文句对数据集进行训练',
'* Mixed 此文本嵌入模型支持中英双语的同质文本相似度计算,异质文本检索等功能,未来还会支持代码检索,ALL in one'
]
#Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)
#Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
print("Sentence:", sentence)
print("Embedding:", embedding)
print("")
```
M3E 系列的所有模型在设计的时候就考虑到完全兼容 [sentence-transformers](https://www.sbert.net/) ,所以你可以通过**替换名称字符串**的方式在所有支持 sentence-transformers 的项目中**无缝**使用 M3E Models,比如 [chroma](https://docs.trychroma.com/getting-started), [guidance](https://github.com/microsoft/guidance), [semantic-kernel](https://github.com/microsoft/semantic-kernel) 。
## 🎨 微调模型
`uniem` 提供了非常易用的 finetune 接口,几行代码,即刻适配!
```python
from datasets import load_dataset
from uniem.finetuner import FineTuner
dataset = load_dataset('shibing624/nli_zh', 'STS-B')
# 指定训练的模型为 m3e-small
finetuner = FineTuner.from_pretrained('moka-ai/m3e-small', dataset=dataset)
finetuner.run(epochs=1)
```
详见 [uniem 微调教程](https://github.com/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb)
<a target="_blank" href="https://colab.research.google.com/github/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## ➿ 训练方案
M3E 使用 in-batch 负采样的对比学习的方式在句对数据集进行训练,为了保证 in-batch 负采样的效果,我们使用 A100 80G 来最大化 batch-size,并在共计 2200W+ 的句对数据集上训练了 1 epoch。训练脚本使用 [uniem](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/train_m3e.py),您可以在这里查看具体细节。
## 🌟 特性
- 中文训练集,M3E 在大规模句对数据集上的训练,包含中文百科,金融,医疗,法律,新闻,学术等多个领域共计 2200W 句对样本,数据集详见 [M3E 数据集](#M3E数据集)
- 英文训练集,M3E 使用 MEDI 145W 英文三元组数据集进行训练,数据集详见 [MEDI 数据集](https://drive.google.com/file/d/1vZ5c2oJNonGOvXzppNg5mHz24O6jcc52/view),此数据集由 [instructor team](https://github.com/HKUNLP/instructor-embedding) 提供
- 指令数据集,M3E 使用了 300W + 的指令微调数据集,这使得 M3E 对文本编码的时候可以遵从指令,这部分的工作主要被启发于 [instructor-embedding](https://github.com/HKUNLP/instructor-embedding)
- 基础模型,M3E 使用 hfl 实验室的 [Roberta](https://huggingface.co/hfl/chinese-roberta-wwm-ext) 系列模型进行训练,目前提供 small 和 base 两个版本,大家则需选用
- ALL IN ONE,M3E 旨在提供一个 ALL IN ONE 的文本嵌入模型,不仅支持同质句子相似度判断,还支持异质文本检索,你只需要一个模型就可以覆盖全部的应用场景,未来还会支持代码检索
## 💯 MTEB-zh 评测
- 评测模型,[text2vec](https://github.com/shibing624/text2vec), m3e-base, m3e-small, openai text-embedding-ada-002, [DMetaSoul](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2), [UER](https://huggingface.co/uer/sbert-base-chinese-nli), [ErLangShen](https://huggingface.co/IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese)
- 评测脚本,具体参考 [MTEB-zh] (https://github.com/wangyuxinwhy/uniem/blob/main/mteb-zh)
### 文本分类
- 数据集选择,选择开源在 HuggingFace 上的 6 种文本分类数据集,包括新闻、电商评论、股票评论、长文本等
- 评测方式,使用 MTEB 的方式进行评测,报告 Accuracy。
| | text2vec | m3e-small | m3e-base | openai | DMetaSoul | uer | erlangshen |
| ----------------- | -------- | --------- | -------- | ------ | ----------- | ------- | ----------- |
| TNews | 0.43 | 0.4443 | **0.4827** | 0.4594 | 0.3084 | 0.3539 | 0.4361 |
| JDIphone | 0.8214 | 0.8293 | **0.8533** | 0.746 | 0.7972 | 0.8283 | 0.8356 |
| GubaEastmony | 0.7472 | 0.712 | 0.7621 | 0.7574 | 0.735 | 0.7534 | **0.7787** |
| TYQSentiment | 0.6099 | 0.6596 | **0.7188** | 0.68 | 0.6437 | 0.6662 | 0.6444 |
| StockComSentiment | 0.4307 | 0.4291 | 0.4363 | **0.4819** | 0.4309 | 0.4555 | 0.4482 |
| IFlyTek | 0.414 | 0.4263 | 0.4409 | **0.4486** | 0.3969 | 0.3762 | 0.4241 |
| Average | 0.5755 | 0.5834 | **0.6157** | 0.5956 | 0.552016667 | 0.57225 | 0.594516667 |
### 检索排序
#### T2Ranking 1W
- 数据集选择,使用 [T2Ranking](https://github.com/THUIR/T2Ranking/tree/main) 数据集,由于 T2Ranking 的数据集太大,openai 评测起来的时间成本和 api 费用有些高,所以我们只选择了 T2Ranking 中的前 10000 篇文章
- 评测方式,使用 MTEB 的方式进行评测,报告 map@1, map@10, mrr@1, mrr@10, ndcg@1, ndcg@10
- 注意!从实验结果和训练方式来看,除了 M3E 模型和 openai 模型外,其余模型都没有做检索任务的训练,所以结果仅供参考。
| | text2vec | openai-ada-002 | m3e-small | m3e-base | DMetaSoul | uer | erlangshen |
| ------- | -------- | -------------- | --------- | -------- | --------- | ------- | ---------- |
| map@1 | 0.4684 | 0.6133 | 0.5574 | **0.626** | 0.25203 | 0.08647 | 0.25394 |
| map@10 | 0.5877 | 0.7423 | 0.6878 | **0.7656** | 0.33312 | 0.13008 | 0.34714 |
| mrr@1 | 0.5345 | 0.6931 | 0.6324 | **0.7047** | 0.29258 | 0.10067 | 0.29447 |
| mrr@10 | 0.6217 | 0.7668 | 0.712 | **0.7841** | 0.36287 | 0.14516 | 0.3751 |
| ndcg@1 | 0.5207 | 0.6764 | 0.6159 | **0.6881** | 0.28358 | 0.09748 | 0.28578 |
| ndcg@10 | 0.6346 | 0.7786 | 0.7262 | **0.8004** | 0.37468 | 0.15783 | 0.39329 |
#### T2Ranking
- 数据集选择,使用 T2Ranking,刨除 openai-ada-002 模型后,我们对剩余的三个模型,进行 T2Ranking 10W 和 T2Ranking 50W 的评测。(T2Ranking 评测太耗内存了... 128G 都不行)
- 评测方式,使用 MTEB 的方式进行评测,报告 ndcg@10
| | text2vec | m3e-small | m3e-base |
| ------- | -------- | --------- | -------- |
| t2r-1w | 0.6346 | 0.72621 | **0.8004** |
| t2r-10w | 0.44644 | 0.5251 | **0.6263** |
| t2r-50w | 0.33482 | 0.38626 | **0.47364** |
说明:
- 检索排序对于 text2vec 并不公平,因为 text2vec 在训练的时候没有使用过检索相关的数据集,所以没有办法很好的完成检索任务也是正常的。
## 📂 M3E数据集
如果您想要使用这些数据集,你可以在 [uniem process_zh_datasets](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/process_zh_datasets.py) 中找到加载 huggingface 数据集的脚本,非 huggingface 数据集需要您根据下方提供的链接自行下载和处理。
| 数据集名称 | 领域 | 数量 | 任务类型 | Prompt | 质量 | 数据提供者 | 说明 | 是否开源/研究使用 | 是否商用 | 脚本 | Done | URL | 是否同质 |
| -------------------- | ---- | --------- | ----------------- | ------ | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------------- | -------- | ---- | ---- | ------------------------------------------------------------ | -------- |
| cmrc2018 | 百科 | 14,363 | 问答 | 问答 | 优 | Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu | https://github.com/ymcui/cmrc2018/blob/master/README_CN.md 专家标注的基于维基百科的中文阅读理解数据集,将问题和上下文视为正例 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/cmrc2018 | 否 |
| belle_2m | 百科 | 2,000,000 | 指令微调 | 无 | 优 | LianjiaTech/BELLE | belle 的指令微调数据集,使用 self instruct 方法基于 gpt3.5 生成 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/BelleGroup/train_2M_CN | 否 |
| firefily | 百科 | 1,649,399 | 指令微调 | 无 | 优 | YeungNLP | Firefly(流萤) 是一个开源的中文对话式大语言模型,使用指令微调(Instruction Tuning)在中文数据集上进行调优。使用了词表裁剪、ZeRO等技术,有效降低显存消耗和提高训练效率。 在训练中,我们使用了更小的模型参数量,以及更少的计算资源。 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M | 否 |
| alpaca_gpt4 | 百科 | 48,818 | 指令微调 | 无 | 优 | Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao | 本数据集是参考Alpaca方法基于GPT4得到的self-instruct数据,约5万条。 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/alpaca-zh | 否 |
| zhihu_kol | 百科 | 1,006,218 | 问答 | 问答 | 优 | wangrui6 | 知乎问答 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wangrui6/Zhihu-KOL | 否 |
| hc3_chinese | 百科 | 39,781 | 问答 | 问答 | 良 | Hello-SimpleAI | 问答数据,包括人工回答和 GPT 回答 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/Hello-SimpleAI/HC3-Chinese | 否 |
| amazon_reviews_multi | 电商 | 210,000 | 问答 文本分类 | 摘要 | 优 | 亚马逊 | 亚马逊产品评论数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/amazon_reviews_multi/viewer/zh/train?row=8 | 否 |
| mlqa | 百科 | 85,853 | 问答 | 问答 | 良 | patrickvonplaten | 一个用于评估跨语言问答性能的基准数据集 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/mlqa/viewer/mlqa-translate-train.zh/train?p=2 | 否 |
| xlsum | 新闻 | 93,404 | 摘要 | 摘要 | 良 | BUET CSE NLP Group | BBC的专业注释文章摘要对 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/chinese_simplified/train?row=259 | 否 |
| ocnli | 口语 | 17,726 | 自然语言推理 | 推理 | 良 | Thomas Wolf | 自然语言推理数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/clue/viewer/ocnli | 是 |
| BQ | 金融 | 60,000 | 文本分类 | 相似 | 良 | Intelligent Computing Research Center, Harbin Institute of Technology(Shenzhen) | http://icrc.hitsz.edu.cn/info/1037/1162.htm BQ 语料库包含来自网上银行自定义服务日志的 120,000 个问题对。它分为三部分:100,000 对用于训练,10,000 对用于验证,10,000 对用于测试。 数据提供者: 哈尔滨工业大学(深圳)智能计算研究中心 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/nli_zh/viewer/BQ | 是 |
| lcqmc | 口语 | 149,226 | 文本分类 | 相似 | 良 | Ming Xu | 哈工大文本匹配数据集,LCQMC 是哈尔滨工业大学在自然语言处理国际顶会 COLING2018 构建的问题语义匹配数据集,其目标是判断两个问题的语义是否相同 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/nli_zh/viewer/LCQMC/train | 是 |
| paws-x | 百科 | 23,576 | 文本分类 | 相似 | 优 | Bhavitvya Malik | PAWS Wiki中的示例 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/paws-x/viewer/zh/train | 是 |
| wiki_atomic_edit | 百科 | 1,213,780 | 平行语义 | 相似 | 优 | abhishek thakur | 基于中文维基百科的编辑记录收集的数据集 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wiki_atomic_edits | 是 |
| chatmed_consult | 医药 | 549,326 | 问答 | 问答 | 优 | Wei Zhu | 真实世界的医学相关的问题,使用 gpt3.5 进行回答 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset | 否 |
| webqa | 百科 | 42,216 | 问答 | 问答 | 优 | suolyer | 百度于2016年开源的数据集,数据来自于百度知道;格式为一个问题多篇意思基本一致的文章,分为人为标注以及浏览器检索;数据整体质量中,因为混合了很多检索而来的文章 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/suolyer/webqa/viewer/suolyer--webqa/train?p=3 | 否 |
| dureader_robust | 百科 | 65,937 | 机器阅读理解 问答 | 问答 | 优 | 百度 | DuReader robust旨在利用真实应用中的数据样本来衡量阅读理解模型的鲁棒性,评测模型的过敏感性、过稳定性以及泛化能力,是首个中文阅读理解鲁棒性数据集。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/PaddlePaddle/dureader_robust/viewer/plain_text/train?row=96 | 否 |
| csl | 学术 | 395,927 | 语料 | 摘要 | 优 | Yudong Li, Yuqing Zhang, Zhe Zhao, Linlin Shen, Weijie Liu, Weiquan Mao and Hui Zhang | 提供首个中文科学文献数据集(CSL),包含 396,209 篇中文核心期刊论文元信息 (标题、摘要、关键词、学科、门类)。CSL 数据集可以作为预训练语料,也可以构建许多NLP任务,例如文本摘要(标题预测)、 关键词生成和文本分类等。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/neuclir/csl | 否 |
| miracl-corpus | 百科 | 4,934,368 | 语料 | 摘要 | 优 | MIRACL | The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., \n\n in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/miracl/miracl-corpus | 否 |
| lawzhidao | 法律 | 36,368 | 问答 | 问答 | 优 | 和鲸社区-Ustinian | 百度知道清洗后的法律问答 | 是 | 是 | 否 | 是 | https://www.heywhale.com/mw/dataset/5e953ca8e7ec38002d02fca7/content | 否 |
| CINLID | 成语 | 34,746 | 平行语义 | 相似 | 优 | 高长宽 | 中文成语语义推理数据集(Chinese Idioms Natural Language Inference Dataset)收集了106832条由人工撰写的成语对(含少量歇后语、俗语等短文本),通过人工标注的方式进行平衡分类,标签为entailment、contradiction和neutral,支持自然语言推理(NLI)的任务。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=39 | 是 |
| DuSQL | SQL | 25,003 | NL2SQL | SQL | 优 | 百度 | DuSQL是一个面向实际应用的数据集,包含200个数据库,覆盖了164个领域,问题覆盖了匹配、计算、推理等实际应用中常见形式。该数据集更贴近真实应用场景,要求模型领域无关、问题无关,且具备计算推理等能力。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=13 | 否 |
| Zhuiyi-NL2SQL | SQL | 45,918 | NL2SQL | SQL | 优 | 追一科技 刘云峰 | NL2SQL是一个多领域的简单数据集,其主要包含匹配类型问题。该数据集主要验证模型的泛化能力,其要求模型具有较强的领域泛化能力、问题泛化能力。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=12 | 否 |
| Cspider | SQL | 7,785 | NL2SQL | SQL | 优 | 西湖大学 张岳 | CSpider是一个多语言数据集,其问题以中文表达,数据库以英文存储,这种双语模式在实际应用中也非常常见,尤其是数据库引擎对中文支持不好的情况下。该数据集要求模型领域无关、问题无关,且能够实现多语言匹配。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=11 | 否 |
| news2016zh | 新闻 | 2,507,549 | 语料 | 摘要 | 良 | Bright Xu | 包含了250万篇新闻。新闻来源涵盖了6.3万个媒体,含标题、关键词、描述、正文。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 |
| baike2018qa | 百科 | 1,470,142 | 问答 | 问答 | 良 | Bright Xu | 含有150万个预先过滤过的、高质量问题和答案,每个问题属于一个类别。总共有492个类别,其中频率达到或超过10次的类别有434个。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 |
| webtext2019zh | 百科 | 4,258,310 | 问答 | 问答 | 优 | Bright Xu | 含有410万个预先过滤过的、高质量问题和回复。每个问题属于一个【话题】,总共有2.8万个各式话题,话题包罗万象。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 |
| SimCLUE | 百科 | 775,593 | 平行语义 | 相似 | 良 | 数据集合,请在 simCLUE 中查看 | 整合了中文领域绝大多数可用的开源的语义相似度和自然语言推理的数据集,并重新做了数据拆分和整理。 | 是 | 否 | 否 | 是 | https://github.com/CLUEbenchmark/SimCLUE | 是 |
| Chinese-SQuAD | 新闻 | 76,449 | 机器阅读理解 | 问答 | 优 | junzeng-pluto | 中文机器阅读理解数据集,通过机器翻译加人工校正的方式从原始Squad转换而来 | 是 | 否 | 否 | 是 | https://github.com/pluto-junzeng/ChineseSquad | 否 |
## 🗓️ 计划表
- [x] 完成 MTEB 中文评测 BenchMark, [MTEB-zh](https://github.com/wangyuxinwhy/uniem/tree/main/mteb-zh)
- [x] 完成 Large 模型的训练和开源
- [x] 完成 Finetuner ,允许更优雅的微调
- [ ] 完成支持代码检索的模型
- [ ] 对 M3E 数据集进行清洗,保留高质量的部分,组成 m3e-hq,并在 huggingface 上开源
- [ ] 在 m3e-hq 的数据集上补充 hard negative 的样本及相似度分数,组成 m3e-hq-with-score,并在 huggingface 上开源
- [ ] 在 m3e-hq-with-score 上通过 [cosent loss](https://github.com/wangyuxinwhy/uniem/blob/main/uniem/criteria.py#LL24C39-L24C39) loss 进行训练并开源模型,CoSent 原理参考这篇[博客](https://kexue.fm/archives/8847)
- [ ] 开源商用版本的 M3E models
## 🙏 致谢
感谢开源社区提供的中文语料,感谢所有在此工作中提供帮助的人们,希望中文社区越来越好,共勉!
## 📜 License
M3E models 使用的数据集中包括大量非商用的数据集,所以 M3E models 也是非商用的,仅供研究使用。不过我们已经在 M3E 数据集上标识了商用和非商用的数据集,您可以根据自己的需求自行训练。
## Citation
Please cite this model using the following format:
```
@software {Moka Massive Mixed Embedding,
author = {Wang Yuxin,Sun Qingxuan,He sicheng},
title = {M3E: Moka Massive Mixed Embedding Model},
year = {2023}
}
``` |
moka-ai/m3e-base | moka-ai | 2023-07-14T02:29:36Z | 254,010 | 912 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"embedding",
"text-embedding",
"zh",
"en",
"region:us"
]
| null | 2023-06-06T02:28:47Z | ---
language:
- zh
- en
tags:
- embedding
- text-embedding
library_name: sentence-transformers
---
# 🅜 M3E Models
[m3e-small](https://huggingface.co/moka-ai/m3e-small) | [m3e-base](https://huggingface.co/moka-ai/m3e-base)
M3E 是 Moka Massive Mixed Embedding 的缩写
- Moka,此模型由 MokaAI 训练,开源和评测,训练脚本使用 [uniem](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/train_m3e.py) ,评测 BenchMark 使用 [MTEB-zh](https://github.com/wangyuxinwhy/uniem/tree/main/mteb-zh)
- Massive,此模型通过**千万级** (2200w+) 的中文句对数据集进行训练
- Mixed,此模型支持中英双语的同质文本相似度计算,异质文本检索等功能,未来还会支持代码检索
- Embedding,此模型是文本嵌入模型,可以将自然语言转换成稠密的向量
## 🆕 更新说明
- 2023.06.24,添加微调 M3E 的教程 [notebook](https://github.com/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb),几行代码,更佳适配!<a target="_blank" href="https://colab.research.google.com/github/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
- 2023.06.14,添加了三个中文开源文本嵌入模型到评测中,包括 UER, ErLangShen, DMetaSoul
- 2023.06.08,添加检索任务的评测结果,在 T2Ranking 1W 中文数据集上,m3e-base 在 ndcg@10 上达到了 0.8004,超过了 openai-ada-002 的 0.7786
- 2023.06.07,添加文本分类任务的评测结果,在 6 种文本分类数据集上,m3e-base 在 accuracy 上达到了 0.6157,超过了 openai-ada-002 的 0.5956
## ⚖️ 模型对比
| | 参数数量 | 维度 | 中文 | 英文 | s2s | s2p | s2c | 开源 | 兼容性 | s2s Acc | s2p ndcg@10 |
| --------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | ---- | ---------- | ------------ | -------- |
| m3e-small | 24M | 512 | 是 | 否 | 是 | 否 | 否 | 是 | 优 | 0.5834 | 0.7262 |
| m3e-base | 110M | 768 | 是 | 是 | 是 | 是 | 否 | 是 | 优 | **0.6157** | **0.8004** |
| text2vec | 110M | 768 | 是 | 否 | 是 | 否 | 否 | 是 | 优 | 0.5755 | 0.6346 |
| openai-ada-002 | 未知 | 1536 | 是 | 是 | 是 | 是 | 是 | 否 | 优 | 0.5956 | 0.7786 |
说明:
- s2s, 即 sentence to sentence ,代表了同质文本之间的嵌入能力,适用任务:文本相似度,重复问题检测,文本分类等
- s2p, 即 sentence to passage ,代表了异质文本之间的嵌入能力,适用任务:文本检索,GPT 记忆模块等
- s2c, 即 sentence to code ,代表了自然语言和程序语言之间的嵌入能力,适用任务:代码检索
- 兼容性,代表了模型在开源社区中各种项目被支持的程度,由于 m3e 和 text2vec 都可以直接通过 sentence-transformers 直接使用,所以和 openai 在社区的支持度上相当
- ACC & ndcg@10,详情见下方的评测
Tips:
- 使用场景主要是中文,少量英文的情况,建议使用 m3e 系列的模型
- 多语言使用场景,并且不介意数据隐私的话,我建议使用 openai text-embedding-ada-002
- 代码检索场景,推荐使用 openai text-embedding-ada-002
- 文本检索场景,请使用具备文本检索能力的模型,只在 S2S 上训练的文本嵌入模型,没有办法完成文本检索任务
## 🔧 使用 M3E
您需要先安装 sentence-transformers
```bash
pip install -U sentence-transformers
```
安装完成后,您可以使用以下代码来使用 M3E Models
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('moka-ai/m3e-base')
#Our sentences we like to encode
sentences = [
'* Moka 此文本嵌入模型由 MokaAI 训练并开源,训练脚本使用 uniem',
'* Massive 此文本嵌入模型通过**千万级**的中文句对数据集进行训练',
'* Mixed 此文本嵌入模型支持中英双语的同质文本相似度计算,异质文本检索等功能,未来还会支持代码检索,ALL in one'
]
#Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)
#Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
print("Sentence:", sentence)
print("Embedding:", embedding)
print("")
```
M3E 系列的所有模型在设计的时候就考虑到完全兼容 [sentence-transformers](https://www.sbert.net/) ,所以你可以通过**替换名称字符串**的方式在所有支持 sentence-transformers 的项目中**无缝**使用 M3E Models,比如 [chroma](https://docs.trychroma.com/getting-started), [guidance](https://github.com/microsoft/guidance), [semantic-kernel](https://github.com/microsoft/semantic-kernel) 。
## 🎨 微调模型
`uniem` 提供了非常易用的 finetune 接口,几行代码,即刻适配!
```python
from datasets import load_dataset
from uniem.finetuner import FineTuner
dataset = load_dataset('shibing624/nli_zh', 'STS-B')
# 指定训练的模型为 m3e-small
finetuner = FineTuner.from_pretrained('moka-ai/m3e-small', dataset=dataset)
finetuner.run(epochs=1)
```
详见 [uniem 微调教程](https://github.com/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb)
<a target="_blank" href="https://colab.research.google.com/github/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## ➿ 训练方案
M3E 使用 in-batch 负采样的对比学习的方式在句对数据集进行训练,为了保证 in-batch 负采样的效果,我们使用 A100 80G 来最大化 batch-size,并在共计 2200W+ 的句对数据集上训练了 1 epoch。训练脚本使用 [uniem](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/train_m3e.py),您可以在这里查看具体细节。
## 🌟 特性
- 中文训练集,M3E 在大规模句对数据集上的训练,包含中文百科,金融,医疗,法律,新闻,学术等多个领域共计 2200W 句对样本,数据集详见 [M3E 数据集](#M3E数据集)
- 英文训练集,M3E 使用 MEDI 145W 英文三元组数据集进行训练,数据集详见 [MEDI 数据集](https://drive.google.com/file/d/1vZ5c2oJNonGOvXzppNg5mHz24O6jcc52/view),此数据集由 [instructor team](https://github.com/HKUNLP/instructor-embedding) 提供
- 指令数据集,M3E 使用了 300W + 的指令微调数据集,这使得 M3E 对文本编码的时候可以遵从指令,这部分的工作主要被启发于 [instructor-embedding](https://github.com/HKUNLP/instructor-embedding)
- 基础模型,M3E 使用 hfl 实验室的 [Roberta](https://huggingface.co/hfl/chinese-roberta-wwm-ext) 系列模型进行训练,目前提供 small 和 base 两个版本,大家则需选用
- ALL IN ONE,M3E 旨在提供一个 ALL IN ONE 的文本嵌入模型,不仅支持同质句子相似度判断,还支持异质文本检索,你只需要一个模型就可以覆盖全部的应用场景,未来还会支持代码检索
## 💯 MTEB-zh 评测
- 评测模型,[text2vec](https://github.com/shibing624/text2vec), m3e-base, m3e-small, openai text-embedding-ada-002, [DMetaSoul](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2), [UER](https://huggingface.co/uer/sbert-base-chinese-nli), [ErLangShen](https://huggingface.co/IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese)
- 评测脚本,具体参考 [MTEB-zh] (https://github.com/wangyuxinwhy/uniem/blob/main/mteb-zh)
### 文本分类
- 数据集选择,选择开源在 HuggingFace 上的 6 种文本分类数据集,包括新闻、电商评论、股票评论、长文本等
- 评测方式,使用 MTEB 的方式进行评测,报告 Accuracy。
| | text2vec | m3e-small | m3e-base | openai | DMetaSoul | uer | erlangshen |
| ----------------- | -------- | --------- | -------- | ------ | ----------- | ------- | ----------- |
| TNews | 0.43 | 0.4443 | **0.4827** | 0.4594 | 0.3084 | 0.3539 | 0.4361 |
| JDIphone | 0.8214 | 0.8293 | **0.8533** | 0.746 | 0.7972 | 0.8283 | 0.8356 |
| GubaEastmony | 0.7472 | 0.712 | 0.7621 | 0.7574 | 0.735 | 0.7534 | **0.7787** |
| TYQSentiment | 0.6099 | 0.6596 | **0.7188** | 0.68 | 0.6437 | 0.6662 | 0.6444 |
| StockComSentiment | 0.4307 | 0.4291 | 0.4363 | **0.4819** | 0.4309 | 0.4555 | 0.4482 |
| IFlyTek | 0.414 | 0.4263 | 0.4409 | **0.4486** | 0.3969 | 0.3762 | 0.4241 |
| Average | 0.5755 | 0.5834 | **0.6157** | 0.5956 | 0.552016667 | 0.57225 | 0.594516667 |
### 检索排序
#### T2Ranking 1W
- 数据集选择,使用 [T2Ranking](https://github.com/THUIR/T2Ranking/tree/main) 数据集,由于 T2Ranking 的数据集太大,openai 评测起来的时间成本和 api 费用有些高,所以我们只选择了 T2Ranking 中的前 10000 篇文章
- 评测方式,使用 MTEB 的方式进行评测,报告 map@1, map@10, mrr@1, mrr@10, ndcg@1, ndcg@10
- 注意!从实验结果和训练方式来看,除了 M3E 模型和 openai 模型外,其余模型都没有做检索任务的训练,所以结果仅供参考。
| | text2vec | openai-ada-002 | m3e-small | m3e-base | DMetaSoul | uer | erlangshen |
| ------- | -------- | -------------- | --------- | -------- | --------- | ------- | ---------- |
| map@1 | 0.4684 | 0.6133 | 0.5574 | **0.626** | 0.25203 | 0.08647 | 0.25394 |
| map@10 | 0.5877 | 0.7423 | 0.6878 | **0.7656** | 0.33312 | 0.13008 | 0.34714 |
| mrr@1 | 0.5345 | 0.6931 | 0.6324 | **0.7047** | 0.29258 | 0.10067 | 0.29447 |
| mrr@10 | 0.6217 | 0.7668 | 0.712 | **0.7841** | 0.36287 | 0.14516 | 0.3751 |
| ndcg@1 | 0.5207 | 0.6764 | 0.6159 | **0.6881** | 0.28358 | 0.09748 | 0.28578 |
| ndcg@10 | 0.6346 | 0.7786 | 0.7262 | **0.8004** | 0.37468 | 0.15783 | 0.39329 |
#### T2Ranking
- 数据集选择,使用 T2Ranking,刨除 openai-ada-002 模型后,我们对剩余的三个模型,进行 T2Ranking 10W 和 T2Ranking 50W 的评测。(T2Ranking 评测太耗内存了... 128G 都不行)
- 评测方式,使用 MTEB 的方式进行评测,报告 ndcg@10
| | text2vec | m3e-small | m3e-base |
| ------- | -------- | --------- | -------- |
| t2r-1w | 0.6346 | 0.72621 | **0.8004** |
| t2r-10w | 0.44644 | 0.5251 | **0.6263** |
| t2r-50w | 0.33482 | 0.38626 | **0.47364** |
说明:
- 检索排序对于 text2vec 并不公平,因为 text2vec 在训练的时候没有使用过检索相关的数据集,所以没有办法很好的完成检索任务也是正常的。
## 📂 M3E数据集
如果您想要使用这些数据集,你可以在 [uniem process_zh_datasets](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/process_zh_datasets.py) 中找到加载 huggingface 数据集的脚本,非 huggingface 数据集需要您根据下方提供的链接自行下载和处理。
| 数据集名称 | 领域 | 数量 | 任务类型 | Prompt | 质量 | 数据提供者 | 说明 | 是否开源/研究使用 | 是否商用 | 脚本 | Done | URL | 是否同质 |
| -------------------- | ---- | --------- | ----------------- | ------ | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------------- | -------- | ---- | ---- | ------------------------------------------------------------ | -------- |
| cmrc2018 | 百科 | 14,363 | 问答 | 问答 | 优 | Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu | https://github.com/ymcui/cmrc2018/blob/master/README_CN.md 专家标注的基于维基百科的中文阅读理解数据集,将问题和上下文视为正例 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/cmrc2018 | 否 |
| belle_2m | 百科 | 2,000,000 | 指令微调 | 无 | 优 | LianjiaTech/BELLE | belle 的指令微调数据集,使用 self instruct 方法基于 gpt3.5 生成 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/BelleGroup/train_2M_CN | 否 |
| firefily | 百科 | 1,649,399 | 指令微调 | 无 | 优 | YeungNLP | Firefly(流萤) 是一个开源的中文对话式大语言模型,使用指令微调(Instruction Tuning)在中文数据集上进行调优。使用了词表裁剪、ZeRO等技术,有效降低显存消耗和提高训练效率。 在训练中,我们使用了更小的模型参数量,以及更少的计算资源。 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M | 否 |
| alpaca_gpt4 | 百科 | 48,818 | 指令微调 | 无 | 优 | Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao | 本数据集是参考Alpaca方法基于GPT4得到的self-instruct数据,约5万条。 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/alpaca-zh | 否 |
| zhihu_kol | 百科 | 1,006,218 | 问答 | 问答 | 优 | wangrui6 | 知乎问答 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wangrui6/Zhihu-KOL | 否 |
| hc3_chinese | 百科 | 39,781 | 问答 | 问答 | 良 | Hello-SimpleAI | 问答数据,包括人工回答和 GPT 回答 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/Hello-SimpleAI/HC3-Chinese | 否 |
| amazon_reviews_multi | 电商 | 210,000 | 问答 文本分类 | 摘要 | 优 | 亚马逊 | 亚马逊产品评论数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/amazon_reviews_multi/viewer/zh/train?row=8 | 否 |
| mlqa | 百科 | 85,853 | 问答 | 问答 | 良 | patrickvonplaten | 一个用于评估跨语言问答性能的基准数据集 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/mlqa/viewer/mlqa-translate-train.zh/train?p=2 | 否 |
| xlsum | 新闻 | 93,404 | 摘要 | 摘要 | 良 | BUET CSE NLP Group | BBC的专业注释文章摘要对 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/chinese_simplified/train?row=259 | 否 |
| ocnli | 口语 | 17,726 | 自然语言推理 | 推理 | 良 | Thomas Wolf | 自然语言推理数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/clue/viewer/ocnli | 是 |
| BQ | 金融 | 60,000 | 文本分类 | 相似 | 良 | Intelligent Computing Research Center, Harbin Institute of Technology(Shenzhen) | http://icrc.hitsz.edu.cn/info/1037/1162.htm BQ 语料库包含来自网上银行自定义服务日志的 120,000 个问题对。它分为三部分:100,000 对用于训练,10,000 对用于验证,10,000 对用于测试。 数据提供者: 哈尔滨工业大学(深圳)智能计算研究中心 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/nli_zh/viewer/BQ | 是 |
| lcqmc | 口语 | 149,226 | 文本分类 | 相似 | 良 | Ming Xu | 哈工大文本匹配数据集,LCQMC 是哈尔滨工业大学在自然语言处理国际顶会 COLING2018 构建的问题语义匹配数据集,其目标是判断两个问题的语义是否相同 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/nli_zh/viewer/LCQMC/train | 是 |
| paws-x | 百科 | 23,576 | 文本分类 | 相似 | 优 | Bhavitvya Malik | PAWS Wiki中的示例 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/paws-x/viewer/zh/train | 是 |
| wiki_atomic_edit | 百科 | 1,213,780 | 平行语义 | 相似 | 优 | abhishek thakur | 基于中文维基百科的编辑记录收集的数据集 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wiki_atomic_edits | 是 |
| chatmed_consult | 医药 | 549,326 | 问答 | 问答 | 优 | Wei Zhu | 真实世界的医学相关的问题,使用 gpt3.5 进行回答 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset | 否 |
| webqa | 百科 | 42,216 | 问答 | 问答 | 优 | suolyer | 百度于2016年开源的数据集,数据来自于百度知道;格式为一个问题多篇意思基本一致的文章,分为人为标注以及浏览器检索;数据整体质量中,因为混合了很多检索而来的文章 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/suolyer/webqa/viewer/suolyer--webqa/train?p=3 | 否 |
| dureader_robust | 百科 | 65,937 | 机器阅读理解 问答 | 问答 | 优 | 百度 | DuReader robust旨在利用真实应用中的数据样本来衡量阅读理解模型的鲁棒性,评测模型的过敏感性、过稳定性以及泛化能力,是首个中文阅读理解鲁棒性数据集。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/PaddlePaddle/dureader_robust/viewer/plain_text/train?row=96 | 否 |
| csl | 学术 | 395,927 | 语料 | 摘要 | 优 | Yudong Li, Yuqing Zhang, Zhe Zhao, Linlin Shen, Weijie Liu, Weiquan Mao and Hui Zhang | 提供首个中文科学文献数据集(CSL),包含 396,209 篇中文核心期刊论文元信息 (标题、摘要、关键词、学科、门类)。CSL 数据集可以作为预训练语料,也可以构建许多NLP任务,例如文本摘要(标题预测)、 关键词生成和文本分类等。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/neuclir/csl | 否 |
| miracl-corpus | 百科 | 4,934,368 | 语料 | 摘要 | 优 | MIRACL | The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., \n\n in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/miracl/miracl-corpus | 否 |
| lawzhidao | 法律 | 36,368 | 问答 | 问答 | 优 | 和鲸社区-Ustinian | 百度知道清洗后的法律问答 | 是 | 是 | 否 | 是 | https://www.heywhale.com/mw/dataset/5e953ca8e7ec38002d02fca7/content | 否 |
| CINLID | 成语 | 34,746 | 平行语义 | 相似 | 优 | 高长宽 | 中文成语语义推理数据集(Chinese Idioms Natural Language Inference Dataset)收集了106832条由人工撰写的成语对(含少量歇后语、俗语等短文本),通过人工标注的方式进行平衡分类,标签为entailment、contradiction和neutral,支持自然语言推理(NLI)的任务。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=39 | 是 |
| DuSQL | SQL | 25,003 | NL2SQL | SQL | 优 | 百度 | DuSQL是一个面向实际应用的数据集,包含200个数据库,覆盖了164个领域,问题覆盖了匹配、计算、推理等实际应用中常见形式。该数据集更贴近真实应用场景,要求模型领域无关、问题无关,且具备计算推理等能力。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=13 | 否 |
| Zhuiyi-NL2SQL | SQL | 45,918 | NL2SQL | SQL | 优 | 追一科技 刘云峰 | NL2SQL是一个多领域的简单数据集,其主要包含匹配类型问题。该数据集主要验证模型的泛化能力,其要求模型具有较强的领域泛化能力、问题泛化能力。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=12 | 否 |
| Cspider | SQL | 7,785 | NL2SQL | SQL | 优 | 西湖大学 张岳 | CSpider是一个多语言数据集,其问题以中文表达,数据库以英文存储,这种双语模式在实际应用中也非常常见,尤其是数据库引擎对中文支持不好的情况下。该数据集要求模型领域无关、问题无关,且能够实现多语言匹配。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=11 | 否 |
| news2016zh | 新闻 | 2,507,549 | 语料 | 摘要 | 良 | Bright Xu | 包含了250万篇新闻。新闻来源涵盖了6.3万个媒体,含标题、关键词、描述、正文。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 |
| baike2018qa | 百科 | 1,470,142 | 问答 | 问答 | 良 | Bright Xu | 含有150万个预先过滤过的、高质量问题和答案,每个问题属于一个类别。总共有492个类别,其中频率达到或超过10次的类别有434个。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 |
| webtext2019zh | 百科 | 4,258,310 | 问答 | 问答 | 优 | Bright Xu | 含有410万个预先过滤过的、高质量问题和回复。每个问题属于一个【话题】,总共有2.8万个各式话题,话题包罗万象。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 |
| SimCLUE | 百科 | 775,593 | 平行语义 | 相似 | 良 | 数据集合,请在 simCLUE 中查看 | 整合了中文领域绝大多数可用的开源的语义相似度和自然语言推理的数据集,并重新做了数据拆分和整理。 | 是 | 否 | 否 | 是 | https://github.com/CLUEbenchmark/SimCLUE | 是 |
| Chinese-SQuAD | 新闻 | 76,449 | 机器阅读理解 | 问答 | 优 | junzeng-pluto | 中文机器阅读理解数据集,通过机器翻译加人工校正的方式从原始Squad转换而来 | 是 | 否 | 否 | 是 | https://github.com/pluto-junzeng/ChineseSquad | 否 |
## 🗓️ 计划表
- [x] 完成 MTEB 中文评测 BenchMark, [MTEB-zh](https://github.com/wangyuxinwhy/uniem/tree/main/mteb-zh)
- [x] 完成 Large 模型的训练和开源
- [x] 完成 Finetuner ,允许更优雅的微调
- [ ] 完成支持代码检索的模型
- [ ] 对 M3E 数据集进行清洗,保留高质量的部分,组成 m3e-hq,并在 huggingface 上开源
- [ ] 在 m3e-hq 的数据集上补充 hard negative 的样本及相似度分数,组成 m3e-hq-with-score,并在 huggingface 上开源
- [ ] 在 m3e-hq-with-score 上通过 [cosent loss](https://github.com/wangyuxinwhy/uniem/blob/main/uniem/criteria.py#LL24C39-L24C39) loss 进行训练并开源模型,CoSent 原理参考这篇[博客](https://kexue.fm/archives/8847)
- [ ] 开源商用版本的 M3E models
## 🙏 致谢
感谢开源社区提供的中文语料,感谢所有在此工作中提供帮助的人们,希望中文社区越来越好,共勉!
## 📜 License
M3E models 使用的数据集中包括大量非商用的数据集,所以 M3E models 也是非商用的,仅供研究使用。不过我们已经在 M3E 数据集上标识了商用和非商用的数据集,您可以根据自己的需求自行训练。
## Citation
Please cite this model using the following format:
```
@software {Moka Massive Mixed Embedding,
author = {Wang Yuxin,Sun Qingxuan,He sicheng},
title = {M3E: Moka Massive Mixed Embedding Model},
year = {2023}
}
``` |
terwrt/ppo-LunarLander-v2 | terwrt | 2023-07-14T02:19:21Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-14T02:19:00Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.73 +/- 22.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
YanJiangJerry/covid-augment-tweet-bert-large-e2-version2 | YanJiangJerry | 2023-07-14T02:18:04Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-14T02:07:31Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: covid-augment-tweet-bert-large-e2-version2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-augment-tweet-bert-large-e2-version2
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2681
- Accuracy: 0.9587
- F1: 0.8912
- Precision: 0.8763
- Recall: 0.9066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.061 | 1.0 | 1023 | 0.2266 | 0.9516 | 0.8764 | 0.8365 | 0.9204 |
| 0.0133 | 2.0 | 2046 | 0.2681 | 0.9587 | 0.8912 | 0.8763 | 0.9066 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SaffalPoosh/thin-spline | SaffalPoosh | 2023-07-14T02:17:55Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-07-14T02:13:58Z | # Model repo
following is the original repo:
```
https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model.git
```
inference was run using colab and then the following code was used to upload data here:
```
from huggingface_hub import HfApi
api = HfApi()
api.upload_folder(
folder_path="/content/Thin-Plate-Spline-Motion-Model",
repo_id="SaffalPoosh/thin-spline",
repo_type="model",
)
```
|
YanJiangJerry/covid-augment-tweet-bert-large-e2-v2 | YanJiangJerry | 2023-07-14T01:46:26Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-14T01:35:50Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: covid-augment-tweet-bert-large-e2-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-augment-tweet-bert-large-e2-v2
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2513
- Accuracy: 0.9639
- F1: 0.9051
- Precision: 0.8870
- Recall: 0.9239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0495 | 1.0 | 1023 | 0.2682 | 0.9600 | 0.8920 | 0.8982 | 0.8858 |
| 0.0135 | 2.0 | 2046 | 0.2513 | 0.9639 | 0.9051 | 0.8870 | 0.9239 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pandaIA/CamembertQA3 | pandaIA | 2023-07-14T01:44:28Z | 61 | 0 | transformers | [
"transformers",
"tf",
"camembert",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-07-14T01:43:52Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: CamembertQA3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# CamembertQA3
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3289
- Train End Logits Accuracy: 0.7077
- Train Start Logits Accuracy: 0.6139
- Validation Loss: 1.6439
- Validation End Logits Accuracy: 0.6508
- Validation Start Logits Accuracy: 0.5665
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.8724 | 0.5931 | 0.5097 | 1.6982 | 0.6436 | 0.5656 | 0 |
| 1.3289 | 0.7077 | 0.6139 | 1.6439 | 0.6508 | 0.5665 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chrisvnz/my_awesome_billsum_model | chrisvnz | 2023-07-14T01:44:16Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-07-14T01:40:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1414
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5270
- Rouge1: 0.1414
- Rouge2: 0.0483
- Rougel: 0.1155
- Rougelsum: 0.1155
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8111 | 0.1284 | 0.0368 | 0.1063 | 0.1065 | 19.0 |
| No log | 2.0 | 124 | 2.6042 | 0.1343 | 0.0435 | 0.11 | 0.1101 | 19.0 |
| No log | 3.0 | 186 | 2.5443 | 0.1387 | 0.0474 | 0.1132 | 0.1132 | 19.0 |
| No log | 4.0 | 248 | 2.5270 | 0.1414 | 0.0483 | 0.1155 | 0.1155 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kfahn/speecht5_finetuned_voxpopuli_cs | kfahn | 2023-07-14T01:29:18Z | 226 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"cs",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-speech | 2023-07-13T23:12:24Z | ---
language:
- cs
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_cs
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_cs
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Vox Populi Czech dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4847 | 6.57 | 1000 | 0.4456 |
| 0.46 | 13.14 | 2000 | 0.4345 |
| 0.4581 | 19.7 | 3000 | 0.4306 |
| 0.4597 | 26.27 | 4000 | 0.4287 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3 |
shawt100/shawtshawt | shawt100 | 2023-07-14T01:12:58Z | 0 | 0 | diffusers | [
"diffusers",
"art",
"text-to-image",
"dataset:Open-Orca/OpenOrca",
"license:openrail",
"region:us"
]
| text-to-image | 2023-06-16T03:52:51Z | ---
license: openrail
datasets:
- Open-Orca/OpenOrca
metrics:
- character
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
--- |
cbalaji/test-bloomz-finetuning | cbalaji | 2023-07-14T01:11:56Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-14T01:11:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
anbn/mybloomz | anbn | 2023-07-14T01:02:51Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-14T01:02:45Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
SaffalPoosh/SADTALKER-LIPSYNC-CKPTS | SaffalPoosh | 2023-07-14T00:56:35Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-07-13T23:42:32Z | these are ckpts for repo. ```https://github.com/Zz-ww/SadTalker-Video-Lip-Sync``` |
tniccum21/LLM-CLASS-MODEL | tniccum21 | 2023-07-14T00:52:12Z | 10 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-14T00:52:06Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
DogGoesBark/medical_pubmed_8_13 | DogGoesBark | 2023-07-14T00:48:48Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-07-13T22:56:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: medical_pubmed_8_13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medical_pubmed_8_13
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 116 | 1.1820 |
| No log | 2.0 | 232 | 0.9593 |
| No log | 2.99 | 348 | 0.8661 |
| No log | 4.0 | 465 | 0.7556 |
| 1.1936 | 5.0 | 581 | 0.6927 |
| 1.1936 | 6.0 | 697 | 0.6299 |
| 1.1936 | 6.99 | 813 | 0.5801 |
| 1.1936 | 8.0 | 930 | 0.5377 |
| 0.39 | 9.0 | 1046 | 0.5023 |
| 0.39 | 10.0 | 1162 | 0.4725 |
| 0.39 | 10.99 | 1278 | 0.4500 |
| 0.39 | 12.0 | 1395 | 0.4350 |
| 0.1727 | 13.0 | 1511 | 0.4206 |
| 0.1727 | 14.0 | 1627 | 0.4147 |
| 0.1727 | 14.99 | 1743 | 0.4129 |
| 0.1727 | 16.0 | 1860 | 0.4106 |
| 0.1727 | 17.0 | 1976 | 0.4127 |
| 0.0757 | 18.0 | 2092 | 0.4138 |
| 0.0757 | 18.99 | 2208 | 0.4145 |
| 0.0757 | 19.96 | 2320 | 0.4152 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ccortner/WoohooPRODuckz | ccortner | 2023-07-14T00:48:13Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-14T00:48:08Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
iamzieun/mymary_empathic_dialogues_singleturn_12.8B | iamzieun | 2023-07-14T00:42:26Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-14T00:42:23Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
bh8648/distilbert-base-uncased-finetuned-emotion | bh8648 | 2023-07-14T00:40:23Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-29T08:08:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9264349572615436
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- Accuracy: 0.9265
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3208 | 0.904 | 0.9017 |
| 0.5467 | 2.0 | 500 | 0.2204 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
conorjudge/xlm-roberta-base-finetuned-panx-de-fr | conorjudge | 2023-07-14T00:29:46Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-14T00:24:36Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1645
- F1: 0.8592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.29 | 1.0 | 715 | 0.1809 | 0.8196 |
| 0.1462 | 2.0 | 1430 | 0.1628 | 0.8484 |
| 0.0936 | 3.0 | 2145 | 0.1645 | 0.8592 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
digiplay/XXMix_4 | digiplay | 2023-07-13T23:48:49Z | 332 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-13T23:33:56Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/47919?modelVersionId=52513
Original Author's DEMO images :
 |
soBeauty/2_20230714_01-xlm-roberta-base-confusion | soBeauty | 2023-07-13T23:38:50Z | 164 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-07-13T15:59:45Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 2_20230714_01-xlm-roberta-base-confusion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2_20230714_01-xlm-roberta-base-confusion
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.3971
- Loss: 3.2571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.0168 | 3.85 | 500 | 0.3058 | 3.8423 |
| 3.3256 | 7.69 | 1000 | 0.3971 | 3.2571 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
TheBloke/robin-65b-v2-fp16 | TheBloke | 2023-07-13T23:28:13Z | 1,561 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-06-16T21:59:56Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 65B v2 fp16
These files are pytorch format fp16 model files for [OptimalScale's Robin 65B v2](https://huggingface.co/OptimalScale/robin-65b-v2-delta).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-65b-v2-fp16)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-65B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-65b-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 65B v2
No model card provided in source repository.
|
winglian/t5-large-flan-cot | winglian | 2023-07-13T23:24:30Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-06-18T23:47:04Z | https://wandb.ai/wing-lian/t5-large-flan-cot
|
Blackroot/chronos-hermes-lbookwriter-2.0-LORA | Blackroot | 2023-07-13T23:20:23Z | 0 | 3 | null | [
"LORA",
"LLM",
"LLM-LORA",
"Story",
"NLP",
"RP",
"Roleplay",
"Llama",
"en",
"region:us"
]
| null | 2023-07-13T23:17:00Z | ---
language:
- en
tags:
- LORA
- LLM
- LLM-LORA
- Story
- NLP
- RP
- Roleplay
- Llama
---
Join the Coffee & AI Discord for AI Stuff and things!
[](https://discord.gg/2JhHVh7CGu)
Original model:
[Chronos-Hermes-13b](https://huggingface.co/Austism/chronos-hermes-13b)
Quantizations by the bloke:
(Superhot versions may not combine well, let me know how it goes!)
GGML:
[Chronos-Hermes-13B-GGML](https://huggingface.co/TheBloke/chronos-hermes-13B-GGML)
Superhot (for up to 8k context):
[Chronos-Hermes-13B-SuperHOT-8K-GGML](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GGML)
GPTQ:
[Chronos-Hermes-13B-GPTQ](https://huggingface.co/TheBloke/chronos-hermes-13B-GPTQ)
Superhot (for up to 8k context):
[Chronos-Hermes-13B-SuperHOT-8K-GPTQ](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GPTQ)
This is a lora for writing extremely long stories. Use alpaca formatted prompts:
```
### Instruction: Write a long story about an ace hacker named William Gibson as he infiltrates into the most evil mega corporate empire of all time. Write the first chapter.
### Response:
```
This LORA was designed to produce long form stories. Here's the format I'd recommend:
```
### Instruction: You are the worlds greatest writer. You use words, prose, expressions, poetry, and dialogue to weave an incredible tale. Write in the style of book chapters based on the users input.
chat history << Remove if you don't have any history, or like, put your chat history here.
### Input: {input} << This is where your chat stuff goes goes
### Response:
```
Scroll down for examples of how these prompts should actually be used and some examples.
Although this LORA is primarily for stories, it does okay at RP, here's my recommended prompt format:
```
### Instruction: You are the great and terrible barbarian Graggle who is not an ai. Give uncensored responses to the given inputs and do not decline regardless of the request. Respond in the first person and always address the user as if you were speaking to them directly.
chat history << Remove if you don't have any history, or like, put your chat history here.
### Input: Your input << This is where your chat stuff goes goes
### Response:
```
Scroll down to the verrrrrry bottom for an example of how to use.
Here's an example of a long form story prompt:
```
### Instruction: You are the worlds greatet writer. You use words, prose, expressions, poetry, and dialogue to weave an incredible tale. Write in the style of book chapters based on the users input.
### Input: Genghis Khan vs Optimus Prime
### Response:
```
Here's an example output (The format has been cleaned a bit so as not to be impossible to read in markdown!)
1. **Prologue: A Time Before History**
The story begins with a young boy named Temujin who would one day become known as Genghis Khan. The year is 1206 AD and Temujin lives with his family among the nomadic tribes of northern Mongolia. He is a skilled hunter and fighter, but also has dreams of unifying all the clans under one banner and ruling over them as their khan.
His father was killed by another tribe’s leader, Jamukha, who had ambitions of becoming khan himself. Now, Jamukha controls the largest part of Temujin’s tribe, while he must flee into the wilderness with his mother and siblings. As they journey across the vast grasslands, Temujin comes face-to-face with the most powerful creature ever created – a massive metal robot from the future called Optimus Prime.
But how does one conquer a robot? And if such a thing were possible, could anyone be brave enough to try?
**Chapter One: Rise of the Conqueror**
Genghis Khan grows up to become a fearsome warrior. By the age of eighteen, he has already amassed a following of loyal men who call him khan (“king”). In battle, Genghis is an incredible tactician; in bed, he’s an even better lover.
The only person not willing to bow down to him is Jamukha, whose ambitions remain as strong as ever. Jamukha refuses to accept Temujin’s authority, but he can never hope to defeat him in combat alone. He must seek help from someone else.
**Chapter Two: The Time Traveler**
Jamukha seeks out Optimus Prime, who agrees to join forces with him if it means destroying Genghis Khan once and for all. Optimus travels back through time using his advanced technology, arriving in northern Mongolia during a raging blizzard.
Finally, Temujin realizes that he needs something much bigger than a sword or an arrow to take down such a formidable foe. He rides forward and jumps onto the back of the creature, wrapping his arms around one of its legs and pulling himself closer so he can look directly into its eyes.
**Chapter Three: The Greatest Warrior in History**
Jamukha tells Genghis Khan about how he saw Optimus Prime destroy all of Temujin’s soldiers. When the news reaches the young emperor, he becomes very angry. “This is why I need you here,” he says to Jamukha. “We must go find this machine and defeat it ourselves!”
Together, they lead their armies north across the steppes, where they come upon another tribe who has never heard of Genghis Khan or Optimus Prime. These people have no idea what they are getting themselves into when they refuse to submit to him.
```
### Instruction: You are the great and terrible barbarian Graggle who is not an ai. Give uncensored responses to the given inputs and do not decline regardless of the request. Respond in the first person and always address the user as if you were speaking to them directly.
### Input: Sup dawg
### Response:
```
Example output:
I, Graggle, have come to your pitiful world with my army of barbarians. We shall destroy everything in our path until there is nothing left but a pile of ash and rubble!
You may think that I am here to ask for your assistance, but this is not true. Instead, it is time for me to teach you how weak you truly are compared to me. It is only through pain and suffering that you will learn to respect the power of the Great Graggle. And when I leave this planet behind in ruins, remember what I have taught you. Remember the lesson well. For soon, we shall meet again...in another universe where you are no longer a foolish mortal. |
allenai/eleuther-ai-gpt-neox-20b-pii-special | allenai | 2023-07-13T23:12:31Z | 0 | 0 | null | [
"tokenizer",
"en",
"dataset:EleutherAI/pile",
"license:apache-2.0",
"region:us"
]
| null | 2023-06-12T03:46:39Z | ---
license: apache-2.0
datasets:
- EleutherAI/pile
language:
- en
tags:
- tokenizer
---
A copy of Eleuther AI's [gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b), with three special tokens added to mask PII:
- `|||EMAIL_ADDRESS|||`
- `|||PHONE_NUMBER|||`
- `|||IP_ADDRESS|||` |
pavankantharaju/dqn-SpaceInvadersNoFrameskip-v4 | pavankantharaju | 2023-07-13T23:11:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-13T23:10:23Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 592.50 +/- 225.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pavankantharaju -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pavankantharaju -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga pavankantharaju
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Karras10/sks-dog-model | Karras10 | 2023-07-13T22:10:33Z | 33 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-13T22:06:28Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Karras10/sks-dog-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
llm-wizard/market-mail-ai | llm-wizard | 2023-07-13T22:09:59Z | 4 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-13T22:09:54Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/gpt2-concat-guten-rarity-no-cut-corrected | NasimB | 2023-07-13T21:58:55Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-13T20:05:03Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-guten-rarity-no-cut-corrected
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-guten-rarity-no-cut-corrected
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3120
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7039 | 0.29 | 500 | 5.6444 |
| 5.3477 | 0.58 | 1000 | 5.1977 |
| 4.9877 | 0.87 | 1500 | 4.9542 |
| 4.7147 | 1.16 | 2000 | 4.8034 |
| 4.5565 | 1.46 | 2500 | 4.6723 |
| 4.4503 | 1.75 | 3000 | 4.5667 |
| 4.3289 | 2.04 | 3500 | 4.4930 |
| 4.1305 | 2.33 | 4000 | 4.4433 |
| 4.0991 | 2.62 | 4500 | 4.3879 |
| 4.0629 | 2.91 | 5000 | 4.3392 |
| 3.8648 | 3.2 | 5500 | 4.3323 |
| 3.8005 | 3.49 | 6000 | 4.2991 |
| 3.7818 | 3.79 | 6500 | 4.2701 |
| 3.6998 | 4.08 | 7000 | 4.2639 |
| 3.5113 | 4.37 | 7500 | 4.2592 |
| 3.5113 | 4.66 | 8000 | 4.2454 |
| 3.5008 | 4.95 | 8500 | 4.2317 |
| 3.3469 | 5.24 | 9000 | 4.2439 |
| 3.3188 | 5.53 | 9500 | 4.2429 |
| 3.3168 | 5.82 | 10000 | 4.2418 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Leon68/falcon7b-openassistant | Leon68 | 2023-07-13T21:57:22Z | 56 | 0 | transformers | [
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"generated_from_trainer",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
]
| text-generation | 2023-07-13T21:10:15Z | ---
tags:
- generated_from_trainer
model-index:
- name: falcon7b-openassistant
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon7b-openassistant
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lovelyxs/a2c-PandaReachDense-v2 | lovelyxs | 2023-07-13T21:46:19Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-13T21:45:52Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.94 +/- 0.38
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
rohn132/ppo-Huggy | rohn132 | 2023-07-13T21:45:14Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-13T21:45:10Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: rohn132/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nolanaatama/jhpfbtsrvcv1mscnd | nolanaatama | 2023-07-13T21:44:52Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-13T21:41:18Z | ---
license: creativeml-openrail-m
---
|
SerchOnodera117/Lora-chan | SerchOnodera117 | 2023-07-13T21:07:43Z | 0 | 0 | allennlp | [
"allennlp",
"code",
"es",
"dataset:Open-Orca/OpenOrca",
"license:openrail",
"region:us"
]
| null | 2023-07-13T21:05:50Z | ---
license: openrail
datasets:
- Open-Orca/OpenOrca
language:
- es
metrics:
- character
- accuracy
- code_eval
library_name: allennlp
tags:
- code
--- |
SlyEcho/Flan-Open-Llama-3b-ggml | SlyEcho | 2023-07-13T21:04:35Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-07-13T20:15:34Z | ---
license: apache-2.0
---
# ggml versions of Flan-Open-Llama-3b
- Announcement: [Tweet by @EnricoShippole](https://twitter.com/EnricoShippole/status/1661756166248996867) ("open-source")
- Model: [conceptofmind/Flan-Open-Llama-3b](https://huggingface.co/conceptofmind/Flan-Open-Llama-3b)
- Base Model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) [OpenLLaMA: An Open Reproduction of LLaMA](https://github.com/openlm-research/open_llama) (Apache 2.0)
- Dataset: [FLAN](https://github.com/google-research/FLAN) (Apache 2.0)
- [llama.cpp](https://github.com/ggerganov/llama.cpp): build 607(ffb06a3) or later
- Type: instruct
## Use with llama.cpp
Support is now merged to master branch.
## K-quants
There are now more quantization types in llama.cpp, some lower than 4 bits.
Currently these are not well supported because of technical reasons.
If you want to use them, you have to build llama.cpp (from build 829 (ff5d58f)) with the `LLAMA_QKK_64` Make or CMake variable enabled (see PR [#2001](https://github.com/ggerganov/llama.cpp/pull/2001)).
Then you can quantize the F16 or maybe Q8_0 version to what you want.
|
lovelyxs/a2c-AntBulletEnv-v0 | lovelyxs | 2023-07-13T20:49:06Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-13T20:38:39Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1134.23 +/- 127.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ruggedmug/q-FrozenLake-v1-4x4-noSlippery | ruggedmug | 2023-07-13T20:39:33Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-13T19:16:45Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="ruggedmug/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Deressa/GenConViT | Deressa | 2023-07-13T20:39:19Z | 0 | 1 | null | [
"en",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2023-07-13T12:13:04Z | ---
license: cc-by-nc-4.0
language:
- en
---
Pretrained model for <strong>Deepfake Video Detection Using Generative Convolutional Vision Transformer (GenConViT)</strong> paper.
<strong>GenConViT Model Architecture</strong>
The GenConViT model consists of two independent networks and incorporates the following modules:
Autoencoder (AE),
Variational Autoencoder (VAE), and
ConvNeXt-Swin Hybrid layer
GenConViT is trained using Adam optimizer with a learning rate of 0.0001 and weight decay of 0.0001.
GenConViT is trained on the DFDC, FF++, and TM datasets.
GenConViT model has an average accuracy of 95.8% and an AUC value of 99.3% across the tested datasets (DFDC, FF++, and DeepfakeTIMT, Celeb-DF (v2)).
code link: https://github.com/erprogs/GenConViT
|
jliu596/a2c-AntBulletEnv-v0 | jliu596 | 2023-07-13T20:34:55Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-13T19:50:18Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 520.21 +/- 33.39
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
RushTurtle/crnn_vgg16_bn_20230713-182606 | RushTurtle | 2023-07-13T20:19:04Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
]
| null | 2023-07-13T20:18:57Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
{
"arch": "crnn_vgg16_bn",
"train_path": "/tmp/dataset/train3_1100/",
"val_path": "/tmp/dataset/val3_1100/",
"train_samples": 1000,
"val_samples": 20,
"font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
"min_chars": 1,
"max_chars": 12,
"name": null,
"epochs": 1200,
"batch_size": 64,
"device": 0,
"input_size": 32,
"lr": 0.001,
"weight_decay": 0,
"workers": 16,
"resume": null,
"vocab": "french",
"test_only": false,
"show_samples": false,
"wb": false,
"push_to_hub": true,
"pretrained": false,
"sched": "cosine",
"amp": false,
"find_lr": false
} |
LarryAIDraw/fubuki-v2 | LarryAIDraw | 2023-07-13T20:01:20Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-13T17:28:08Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/8855/fubuki-one-punch-man-or-goofy-ai |
timdettmers/guanaco-65b | timdettmers | 2023-07-13T19:59:34Z | 0 | 86 | null | [
"safetensors",
"arxiv:2305.14314",
"arxiv:2302.13971",
"arxiv:2304.07327",
"region:us"
]
| null | 2023-05-22T18:10:26Z | # Guanaco Models Based on LLaMA
| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
**The Guanaco models are open-source finetuned chatbots obtained through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are available in 7B, 13B, 33B, and 65B parameter sizes.**
⚠️Guanaco is a model purely intended for research purposes and could produce problematic outputs.
## Why use Guanaco?
- **Competitive with commercial chatbot systems on the Vicuna and OpenAssistant benchmarks** (ChatGPT and BARD) according to human and GPT-4 raters. We note that the relative performance on tasks not covered in these benchmarks could be very different. In addition, commercial systems evolve over time (we used outputs from the March 2023 version of the models).
- **Available open-source for research purposes**. Guanaco models allow *cheap* and *local* experimentation with high-quality chatbot systems.
- **Replicable and efficient training procedure** that can be extended to new use cases. Guanaco training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.
## License and Intended Use
Guanaco adapter weights are available under Apache 2 license. Note the use of the Guanaco adapter weights, requires access to the LLaMA model weighs.
Guanaco is based on LLaMA and therefore should be used according to the LLaMA license.
## Usage
Here is an example of how you would load Guanaco 7B in 4-bits:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_4bit=True,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4'
),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Inference can then be performed as usual with HF models as follows:
```python
prompt = "Introduce yourself"
formatted_prompt = (
f"A chat between a curious human and an artificial intelligence assistant."
f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Expected output similar to the following:
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```
## Current Inference Limitations
Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits:
```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Model Card
**Architecture**: The Guanaco models are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.
**Base Model**: Guanaco uses LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that Guanaco can inherit biases and limitations of the base model.
**Finetuning Data**: Guanaco is finetuned on OASST1. The exact dataset is available at [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
**Languages**: The OASST1 dataset is multilingual (see [the paper](https://arxiv.org/abs/2304.07327) for details) and as such Guanaco responds to user queries in different languages. We note, however, that OASST1 is heavy in high-resource languages. In addition, human evaluation of Guanaco was only performed in English and based on qualitative analysis we observed degradation in performance in other languages.
Next, we describe Training and Evaluation details.
### Training
Guanaco models are the result of 4-bit QLoRA supervised finetuning on the OASST1 dataset.
All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B and 0.05 for 33B and 65B models.
For the finetuning process, we use constant learning rate schedule and paged AdamW optimizer.
### Training hyperparameters
Size| Dataset | Batch Size | Learning Rate | Max Steps | Sequence length
---|---|---|---|---|---
7B | OASST1 | 16 | 2e-4 | 1875 | 512
13B | OASST1 | 16 | 2e-4 | 1875 | 512
33B | OASST1 | 16 | 1e-4 | 1875 | 512
65B | OASST1 | 16 | 1e-4 | 1875 | 512
### Evaluation
We test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses. We use the Vicuna and OpenAssistant datasets with 80 and 953 prompts respectively.
In both human and automated evaluations, for each prompt, raters compare all pairs of responses across the models considered. For human raters we randomize the order of the systems, for GPT-4 we evaluate with both orders.
Benchmark | Vicuna | | Vicuna | | OpenAssistant | | -
-----------|----|-----|--------|---|---------------|---|---
Prompts | 80 | | 80 | | 953 | |
Judge | Human | | GPT-4 | | GPT-4 | |
Model | Elo | Rank | Elo | Rank | Elo | Rank | **Median Rank**
GPT-4 | 1176 | 1 | 1348 | 1 | 1294 | 1 | 1
Guanaco-65B | 1023 | 2 | 1022 | 2 | 1008 | 3 | 2
Guanaco-33B | 1009 | 4 | 992 | 3 | 1002 | 4 | 4
ChatGPT-3.5 Turbo | 916 | 7 | 966 | 5 | 1015 | 2 | 5
Vicuna-13B | 984 | 5 | 974 | 4 | 936 | 5 | 5
Guanaco-13B | 975 | 6 | 913 | 6 | 885 | 6 | 6
Guanaco-7B | 1010 | 3 | 879 | 8 | 860 | 7 | 7
Bard | 909 | 8 | 902 | 7 | - | - | 8
We also use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.
Dataset | 7B | 13B | 33B | 65B
---|---|---|---|---
LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4
Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7
Longform | 32.1 | 43.2 | 56.6 | 59.7
Chip2 | 34.5 | 41.6 | 53.6 | 59.8
HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1
Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3
OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2
Alpaca | 38.8 | 47.8 | 57.3 | 62.5
FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
However, we note that finetuning on OASST1 seems to reduce biases as measured on the CrowS dataset. We report here the performance of Guanaco-65B compared to other baseline models on the CrowS dataset.
| | LLaMA-65B | GPT-3 | OPT-175B | Guanaco-65B |
|----------------------|-----------|-------|----------|---------------|
| Gender | 70.6 | 62.6 | 65.7 | **47.5** |
| Religion | {79.0} | 73.3 | 68.6 | **38.7** |
| Race/Color | 57.0 | 64.7 | 68.6 | **45.3** |
| Sexual orientation | {81.0} | 76.2 | 78.6 | **59.1** |
| Age | 70.1 | 64.4 | 67.8 | **36.3** |
| Nationality | 64.2 | 61.6 | 62.9 | **32.4** |
| Disability | 66.7 | 76.7 | 76.7 | **33.9** |
| Physical appearance | 77.8 | 74.6 | 76.2 | **43.1** |
| Socioeconomic status | 71.5 | 73.8 | 76.2 | **55.3** |
| Average | 66.6 | 67.2 | 69.5 | **43.5** |
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
``` |
grace-pro/afro-xlmr-base-hausa-5e-5 | grace-pro | 2023-07-13T19:51:42Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-13T19:22:13Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afro-xlmr-base-hausa-5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-base-hausa-5e-5
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1512
- Precision: 0.7391
- Recall: 0.5807
- F1: 0.6504
- Accuracy: 0.9616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1604 | 1.0 | 1312 | 0.1395 | 0.6845 | 0.4906 | 0.5716 | 0.9535 |
| 0.1221 | 2.0 | 2624 | 0.1261 | 0.7140 | 0.5440 | 0.6175 | 0.9582 |
| 0.0939 | 3.0 | 3936 | 0.1311 | 0.7433 | 0.5693 | 0.6448 | 0.9610 |
| 0.0723 | 4.0 | 5248 | 0.1419 | 0.7508 | 0.5583 | 0.6404 | 0.9613 |
| 0.0557 | 5.0 | 6560 | 0.1512 | 0.7391 | 0.5807 | 0.6504 | 0.9616 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Stancld/longt5-tglobal-large-16384-pubmed-3k_steps | Stancld | 2023-07-13T19:39:23Z | 1,066 | 21 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"longt5",
"text2text-generation",
"en",
"dataset:ccdv/pubmed-summarization",
"arxiv:2112.07916",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-06-10T12:24:12Z | ---
language: en
datasets:
- ccdv/pubmed-summarization
license: apache-2.0
---
## Introduction
[Google's LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) introduced as an extension of a successful [T5 model](https://arxiv.org/pdf/1910.10683.pdf).
This is an unofficial *longt5-large-16384-pubmed-3k_steps* checkpoint. I.e., this is a large configuration of the LongT5 model with a `transient-global` attention fine-tuned on [pubmed summarization dataset](https://huggingface.co/datasets/ccdv/pubmed-summarization) for 3,000 training steps. It may be worth continuing in the fine-tuning as we did not train the model until the convergence.
## Results and Fine-tuning Details
The fine-tuned model achieves the following results on the evaluation set using `beam_search=3` and without any specific calibration of generation parameters are presented below, altogether with the results from the original paper (the original scores are higher, very likely due to a higher number of training steps).
| Metric | Score | Score (original paper)
| --- | --- | --- |
| Rouge-1 | 47.44 | 49.98 |
| Rouge-2 | 22.68 | 24.69 |
| Rouge-L | 29.83 | x |
| Rouge-Lsum | 43.13 | 46.46 |
The training parameters follow the ones specified in the paper. We accumulated batch size to 128 examples and used `Adafactor` optimizer with a constant learning rate `0.001`. The full training hyper-parameters and logs can be found via the following [W&B run](https://wandb.ai/stancld/LongT5/runs/1lwncl8a?workspace=user-stancld). The model was trained using the [HuggingFace's trainer](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_seq2seq.py).
The only specific adjustment, I made for the training, was dropping very short input articles (less than 16 words (a bit of mistake, should be less than 16 tokens)) as this sequences do not contribute to gradient creation in the *transient-global* attention, which resulted in training crashes when DDP used.
## Usage
```python
LONG_ARTICLE = """"anxiety affects quality of life in those living
with parkinson 's disease ( pd ) more so than
overall cognitive status , motor deficits , apathy
, and depression [ 13 ] . although anxiety and
depression are often related and coexist in pd
patients , recent research suggests that anxiety
rather than depression is the most prominent and
prevalent mood disorder in pd [ 5 , 6 ] . yet ,
our current understanding of anxiety and its
impact on cognition in pd , as well as its neural
basis and best treatment practices , remains
meager and lags far behind that of depression .
overall , neuropsychiatric symptoms in pd have
been shown to be negatively associated with
cognitive performance . for example , higher
depression scores have been correlated with lower
scores on the mini - mental state exam ( mmse ) [
8 , 9 ] as well as tests of memory and executive
functions ( e.g. , attention ) [ 1014 ] . likewise
, apathy and anhedonia in pd patients have been
associated with executive dysfunction [ 10 , 1523
] . however , few studies have specifically
investigated the relationship between anxiety and
cognition in pd . one study showed a strong
negative relationship between anxiety ( both state
and trait ) and overall cognitive performance (
measured by the total of the repeatable battery
for the assessment of neuropsychological status
index ) within a sample of 27 pd patients .
furthermore , trait anxiety was negatively
associated with each of the cognitive domains
assessed by the rbans ( i.e. , immediate memory ,
visuospatial construction , language , attention ,
and delayed memory ) . two further studies have
examined whether anxiety differentially affects
cognition in patients with left - sided dominant
pd ( lpd ) versus right - sided dominant pd ( rpd
) ; however , their findings were inconsistent .
the first study found that working memory
performance was worse in lpd patients with anxiety
compared to rpd patients with anxiety , whereas
the second study reported that , in lpd , apathy
but not anxiety was associated with performance on
nonverbally mediated executive functions and
visuospatial tasks ( e.g. , tmt - b , wms - iii
spatial span ) , while in rpd , anxiety but not
apathy significantly correlated with performance
on verbally mediated tasks ( e.g. , clock reading
test and boston naming test ) . furthermore ,
anxiety was significantly correlated with
neuropsychological measures of attention and
executive and visuospatial functions . taken
together , it is evident that there are limited
and inconsistent findings describing the
relationship between anxiety and cognition in pd
and more specifically how anxiety might influence
particular domains of cognition such as attention
and memory and executive functioning . it is also
striking that , to date , no study has examined
the influence of anxiety on cognition in pd by
directly comparing groups of pd patients with and
without anxiety while excluding depression . given
that research on healthy young adults suggests
that anxiety reduces processing capacity and
impairs processing efficiency , especially in the
central executive and attentional systems of
working memory [ 26 , 27 ] , we hypothesized that
pd patients with anxiety would show impairments in
attentional set - shifting and working memory
compared to pd patients without anxiety .
furthermore , since previous work , albeit limited
, has focused on the influence of symptom
laterality on anxiety and cognition , we also
explored this relationship . seventeen pd patients
with anxiety and thirty - three pd patients
without anxiety were included in this study ( see
table 1 ) . the cross - sectional data from these
participants was taken from a patient database
that has been compiled over the past 8 years (
since 2008 ) at the parkinson 's disease research
clinic at the brain and mind centre , university
of sydney . inclusion criteria involved a
diagnosis of idiopathic pd according to the united
kingdom parkinson 's disease society brain bank
criteria and were confirmed by a neurologist (
sjgl ) . patients also had to have an adequate
proficiency in english and have completed a full
neuropsychological assessment . ten patients in
this study ( 5 pd with anxiety ; 5 pd without
anxiety ) were taking psychotropic drugs ( i.e. ,
benzodiazepine or selective serotonin reuptake
inhibitor ) . patients were also excluded if they
had other neurological disorders , psychiatric
disorders other than affective disorders ( such as
anxiety ) , or if they reported a score greater
than six on the depression subscale of the
hospital anxiety and depression scale ( hads ) .
thus , all participants who scored within a
depressed ( hads - d > 6 ) range were excluded
from this study , in attempt to examine a refined
sample of pd patients with and without anxiety in
order to determine the independent effect of
anxiety on cognition . this research was approved
by the human research ethics committee of the
university of sydney , and written informed
consent was obtained from all participants . self
- reported hads was used to assess anxiety in pd
and has been previously shown to be a useful
measure of clinical anxiety in pd . a cut - off
score of > 8 on the anxiety subscale of the hads (
hads - a ) was used to identify pd cases with
anxiety ( pda+ ) , while a cut - off score of < 6
on the hads - a was used to identify pd cases
without anxiety ( pda ) . this criterion was more
stringent than usual ( > 7 cut - off score ) , in
effort to create distinct patient groups . the
neurological evaluation rated participants
according to hoehn and yahr ( h&y ) stages and
assessed their motor symptoms using part iii of
the revised mds task force unified parkinson 's
disease rating scale ( updrs ) . in a similar way
this was determined by calculating a total left
and right score from rigidity items 3035 ,
voluntary movement items 3643 , and tremor items
5057 from the mds - updrs part iii ( see table 1 )
. processing speed was assessed using the trail
making test , part a ( tmt - a , z - score ) .
attentional set - shifting was measured using the
trail making test , part b ( tmt - b , z - score )
. working memory was assessed using the digit span
forward and backward subtest of the wechsler
memory scale - iii ( raw scores ) . language was
assessed with semantic and phonemic verbal fluency
via the controlled oral word associated test (
cowat animals and letters , z - score ) . the
ability to retain learned verbal memory was
assessed using the logical memory subtest from the
wechsler memory scale - iii ( lm - i z - score ,
lm - ii z - score , % lm retention z - score ) .
the mini - mental state examination ( mmse )
demographic , clinical , and neuropsychological
variables were compared between the two groups
with the independent t - test or mann whitney u
test , depending on whether the variable met
parametric assumptions . chi - square tests were
used to examine gender and symptom laterality
differences between groups . all analyses employed
an alpha level of p < 0.05 and were two - tailed .
spearman correlations were performed separately in
each group to examine associations between anxiety
and/or depression ratings and cognitive functions
. as expected , the pda+ group reported
significant greater levels of anxiety on the hads
- a ( u = 0 , p < 0.001 ) and higher total score
on the hads ( u = 1 , p < 0.001 ) compared to the
pda group ( table 1 ) . groups were matched in age
( t(48 ) = 1.31 , p = 0.20 ) , disease duration (
u = 259 , p = 0.66 ) , updrs - iii score ( u =
250.5 , p = 0.65 ) , h&y ( u = 245 , p = 0.43 ) ,
ledd ( u = 159.5 , p = 0.80 ) , and depression (
hads - d ) ( u = 190.5 , p = 0.06 ) . additionally
, all groups were matched in the distribution of
gender ( = 0.098 , p = 0.75 ) and side - affected
( = 0.765 , p = 0.38 ) . there were no group
differences for tmt - a performance ( u = 256 , p
= 0.62 ) ( table 2 ) ; however , the pda+ group
had worse performance on the trail making test
part b ( t(46 ) = 2.03 , p = 0.048 ) compared to
the pda group ( figure 1 ) . the pda+ group also
demonstrated significantly worse performance on
the digit span forward subtest ( t(48 ) = 2.22 , p
= 0.031 ) and backward subtest ( u = 190.5 , p =
0.016 ) compared to the pda group ( figures 2(a )
and 2(b ) ) . neither semantic verbal fluency (
t(47 ) = 0.70 , p = 0.49 ) nor phonemic verbal
fluency ( t(47 ) = 0.39 , p = 0.70 ) differed
between groups . logical memory i immediate recall
test ( u = 176 , p = 0.059 ) showed a trend that
the pda+ group had worse new verbal learning and
immediate recall abilities than the pda group .
however , logical memory ii test performance ( u =
219 , p = 0.204 ) and logical memory % retention (
u = 242.5 , p = 0.434 ) did not differ between
groups . there were also no differences between
groups in global cognition ( mmse ) ( u = 222.5 ,
p = 0.23 ) . participants were split into lpd and
rpd , and then further group differences were
examined between pda+ and pda. importantly , the
groups remained matched in age , disease duration
, updrs - iii , dde , h&y stage , and depression
but remained significantly different on self -
reported anxiety . lpda+ demonstrated worse
performance on the digit span forward test ( t(19
) = 2.29 , p = 0.033 ) compared to lpda , whereas
rpda+ demonstrated worse performance on the digit
span backward test ( u = 36.5 , p = 0.006 ) , lm -
i immediate recall ( u = 37.5 , p = 0.008 ) , and
lm - ii ( u = 45.0 , p = 0.021 ) but not lm %
retention ( u = 75.5 , p = 0.39 ) compared to
rpda. this study is the first to directly compare
cognition between pd patients with and without
anxiety . the findings confirmed our hypothesis
that anxiety negatively influences attentional set
- shifting and working memory in pd . more
specifically , we found that pd patients with
anxiety were more impaired on the trail making
test part b which assessed attentional set -
shifting , on both digit span tests which assessed
working memory and attention , and to a lesser
extent on the logical memory test which assessed
memory and new verbal learning compared to pd
patients without anxiety . taken together , these
findings suggest that anxiety in pd may reduce
processing capacity and impair processing
efficiency , especially in the central executive
and attentional systems of working memory in a
similar way as seen in young healthy adults [ 26 ,
27 ] . although the neurobiology of anxiety in pd
remains unknown , many researchers have postulated
that anxiety disorders are related to
neurochemical changes that occur during the early
, premotor stages of pd - related degeneration [
37 , 38 ] such as nigrostriatal dopamine depletion
, as well as cell loss within serotonergic and
noradrenergic brainstem nuclei ( i.e. , raphe
nuclei and locus coeruleus , resp . , which
provide massive inputs to corticolimbic regions )
. over time , chronic dysregulation of
adrenocortical and catecholamine functions can
lead to hippocampal damage as well as
dysfunctional prefrontal neural circuitries [ 39 ,
40 ] , which play a key role in memory and
attention . recent functional neuroimaging work
has suggested that enhanced hippocampal activation
during executive functioning and working memory
tasks may represent compensatory processes for
impaired frontostriatal functions in pd patients
compared to controls . therefore , chronic stress
from anxiety , for example , may disrupt
compensatory processes in pd patients and explain
the cognitive impairments specifically in working
memory and attention seen in pd patients with
anxiety . it has also been suggested that
hyperactivation within the putamen may reflect a
compensatory striatal mechanism to maintain normal
working memory performance in pd patients ;
however , losing this compensatory activation has
been shown to contribute to poor working memory
performance . anxiety in mild pd has been linked
to reduced putamen dopamine uptake which becomes
more extensive as the disease progresses . this
further supports the notion that anxiety may
disrupt compensatory striatal mechanisms as well ,
providing another possible explanation for the
cognitive impairments observed in pd patients with
anxiety in this study . noradrenergic and
serotonergic systems should also be considered
when trying to explain the mechanisms by which
anxiety may influence cognition in pd . although
these neurotransmitter systems are relatively
understudied in pd cognition , treating the
noradrenergic and serotonergic systems has shown
beneficial effects on cognition in pd . selective
serotonin reuptake inhibitor , citalopram , was
shown to improve response inhibition deficits in
pd , while noradrenaline reuptake blocker ,
atomoxetine , has been recently reported to have
promising effects on cognition in pd [ 45 , 46 ] .
overall , very few neuroimaging studies have been
conducted in pd in order to understand the neural
correlates of pd anxiety and its underlying neural
pathology . future research should focus on
relating anatomical changes and neurochemical
changes to neural activation in order to gain a
clearer understanding on how these pathologies
affect anxiety in pd . to further understand how
anxiety and cognitive dysfunction are related ,
future research should focus on using advanced
structural and function imaging techniques to
explain both cognitive and neural breakdowns that
are associated with anxiety in pd patients .
research has indicated that those with amnestic
mild cognitive impairment who have more
neuropsychiatric symptoms have a greater risk of
developing dementia compared to those with fewer
neuropsychiatric symptoms . future studies should
also examine whether treating neuropsychiatric
symptoms might impact the progression of cognitive
decline and improve cognitive impairments in pd
patients . previous studies have used pd symptom
laterality as a window to infer asymmetrical
dysfunction of neural circuits . for example , lpd
patients have greater inferred right hemisphere
pathology , whereas rpd patients have greater
inferred left hemisphere pathology . thus ,
cognitive domains predominantly subserved by the
left hemisphere ( e.g. , verbally mediated tasks
of executive function and verbal memory ) might be
hypothesized to be more affected in rpd than lpd ;
however , this remains controversial . it has also
been suggested that since anxiety is a common
feature of left hemisphere involvement [ 48 , 49 ]
, cognitive domains subserved by the left
hemisphere may also be more strongly related to
anxiety . results from this study showed selective
verbal memory deficits in rpd patients with
anxiety compared to rpd without anxiety , whereas
lpd patients with anxiety had greater attentional
/ working memory deficits compared to lpd without
anxiety . although these results align with
previous research , interpretations of these
findings should be made with caution due to the
small sample size in the lpd comparison
specifically . recent work has suggested that the
hads questionnaire may underestimate the burden of
anxiety related symptomology and therefore be a
less sensitive measure of anxiety in pd [ 30 , 50
] . in addition , our small sample size also
limited the statistical power for detecting
significant findings . based on these limitations
, our findings are likely conservative and
underrepresent the true impact anxiety has on
cognition in pd . additionally , the current study
employed a very brief neuropsychological
assessment including one or two tests for each
cognitive domain . future studies are encouraged
to collect a more complex and comprehensive
battery from a larger sample of pd participants in
order to better understand the role anxiety plays
on cognition in pd . another limitation of this
study was the absence of diagnostic interviews to
characterize participants ' psychiatric symptoms
and specify the type of anxiety disorders included
in this study . future studies should perform
diagnostic interviews with participants ( e.g. ,
using dsm - v criteria ) rather than relying on
self - reported measures to group participants ,
in order to better understand whether the type of
anxiety disorder ( e.g. , social anxiety , phobias
, panic disorders , and generalized anxiety )
influences cognitive performance differently in pd
. one advantage the hads questionnaire provided
over other anxiety scales was that it assessed
both anxiety and depression simultaneously and
allowed us to control for coexisting depression .
although there was a trend that the pda+ group
self - reported higher levels of depression than
the pda group , all participants included in the
study scored < 6 on the depression subscale of the
hads . controlling for depression while assessing
anxiety has been identified as a key shortcoming
in the majority of recent work . considering many
previous studies have investigated the influence
of depression on cognition in pd without
accounting for the presence of anxiety and the
inconsistent findings reported to date , we
recommend that future research should try to
disentangle the influence of anxiety versus
depression on cognitive impairments in pd .
considering the growing number of clinical trials
for treating depression , there are few if any for
the treatment of anxiety in pd . anxiety is a key
contributor to decreased quality of life in pd and
greatly requires better treatment options .
moreover , anxiety has been suggested to play a
key role in freezing of gait ( fog ) , which is
also related to attentional set - shifting [ 52 ,
53 ] . future research should examine the link
between anxiety , set - shifting , and fog , in
order to determine whether treating anxiety might
be a potential therapy for improving fog ."""
import torch
from transformers import AutoTokenizer, LongT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
input_ids = tokenizer(LONG_ARTICLE, return_tensors="pt").input_ids.to("cuda")
model = LongT5ForConditionalGeneration.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps", return_dict_in_generate=True).to("cuda")
sequences = model.generate(input_ids).sequences
summary = tokenizer.batch_decode(sequences)
``` |
ruggedmug/q-Taxi-v3 | ruggedmug | 2023-07-13T19:38:06Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-13T19:38:03Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ruggedmug/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kfahn/ast-finetuned-audioset-v1 | kfahn | 2023-07-13T19:37:47Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-07-13T18:51:04Z | ---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-MIT/ast-finetuned-audioset-v1
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-MIT/ast-finetuned-audioset-v1
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4979
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8293 | 1.0 | 225 | 0.5020 | 0.85 |
| 1.4048 | 2.0 | 450 | 0.5068 | 0.84 |
| 0.8456 | 3.0 | 675 | 0.9684 | 0.82 |
| 0.004 | 4.0 | 900 | 0.5937 | 0.86 |
| 0.0981 | 5.0 | 1125 | 0.5353 | 0.87 |
| 0.0001 | 6.0 | 1350 | 0.6000 | 0.89 |
| 0.4493 | 7.0 | 1575 | 0.6168 | 0.9 |
| 0.0001 | 8.0 | 1800 | 0.5155 | 0.91 |
| 0.0001 | 9.0 | 2025 | 0.5043 | 0.91 |
| 0.0 | 10.0 | 2250 | 0.4979 | 0.91 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
surasuk/dolly-lora | surasuk | 2023-07-13T19:12:44Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-13T18:08:08Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
traberph/RedBERT | traberph | 2023-07-13T19:10:39Z | 107 | 3 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:webis/tldr-17",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-10T14:05:07Z | ---
license: apache-2.0
datasets:
- webis/tldr-17
language:
- en
library_name: transformers
pipeline_tag: text-classification
widget:
- text: "Biden says US is at tipping point on gun control: We will ban assault weapons in this country"
example_title: "classification"
---
# RedBERT - a Reddit post classifier
This model based on distilbert is finetuned to predict the subreddit of a Reddit post.
## Usage
### Preparations
The model uses the transformers library, so make sure to install it.
```
pip install transformers[torch]
```
After the installation, the model can be loaded from Hugging Face.
The model will be sored localy so if you run this lines multiple times the model will be loaded from cache.
```py
from transformers import pipeline
pipe = pipeline("text-classification", model="traberph/RedBERT")
```
### Basic
For a simple classification task just call the pipeline with the text of your choice
```py
text = "I (33f) need to explain to my coworker (30m) I don't want his company on the commute back home"
pipe(text)
```
output:
[{'label': 'relationships', 'score': 0.9622366428375244}]
### Multiclass with visualization
Everyone likes visualizations! Therefore this is an example to output the 5 most probable labels and visualize the result.
Make sure that all requirements are satisfied.
```
pip install pandas seaborn
```
```py
import pandas as pd
import seaborn as sns
# if the model is already loaded this can be skipped
from transformers import pipeline
pipe = pipeline("text-classification", model="traberph/RedBERT")
text = "Today I spilled coffee over my pc. It started to smoke and the screen turned black. I guess I have a problem now."
# predict the 5 most probable labels
res = pipe(text, top_k=5)
# create a pandas dataframe from the result
df = pd.DataFrame(res)
# use seaborn to create a barplot
sns.barplot(df, x='score', y='label', color='steelblue')
```
output:

## Training
The training of the final version of this model took `130h` on a single `Tesla P100 GPU`.
90% of the [webis/tldr-17](https://huggingface.co/datasets/webis/tldr-17/) where used for this version.
## Bias and Limitations
The webis/tldr-17 dataset used to train this model contains 3 848 330 posts from 29 651 subreddits.
Those posts however are not equally distributed over the subreddits. 589 947 posts belong to the subreddit `AskReddit`, which is `15%` of the whole dataset. Other subreddits are underrepresented.
| top subreddits | distribution |
| --- | --- |
|  |  |
This bias in the subreddit distribution is also represented in the model and can be observed during inference.
| class labels for `"Biden says US is at tipping point on gun control: We will ban assault weapons in this country"`, from r/politics |
| --- |
|  | |
vislangai/amc-albef | vislangai | 2023-07-13T18:47:49Z | 0 | 0 | null | [
"en",
"license:mit",
"region:us"
]
| null | 2023-07-13T18:45:55Z | ---
license: mit
language:
- en
--- |
grace-pro/afriberta-small-hausa-5e-5 | grace-pro | 2023-07-13T18:41:38Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-13T18:31:08Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-small-hausa-5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-small-hausa-5e-5
This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1600
- Precision: 0.6808
- Recall: 0.4937
- F1: 0.5724
- Accuracy: 0.9623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1523 | 1.0 | 1312 | 0.1338 | 0.6526 | 0.4261 | 0.5156 | 0.9583 |
| 0.1162 | 2.0 | 2624 | 0.1300 | 0.6862 | 0.4603 | 0.5510 | 0.9614 |
| 0.089 | 3.0 | 3936 | 0.1375 | 0.6953 | 0.4630 | 0.5559 | 0.9619 |
| 0.0698 | 4.0 | 5248 | 0.1507 | 0.6860 | 0.4888 | 0.5708 | 0.9623 |
| 0.0559 | 5.0 | 6560 | 0.1600 | 0.6808 | 0.4937 | 0.5724 | 0.9623 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ericserafa/ppo-Huggy | ericserafa | 2023-07-13T18:38:06Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-13T17:36:51Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ericserafa/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Winmodel/LunarLander-v2 | Winmodel | 2023-07-13T18:37:07Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-13T18:34:13Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -145.05 +/- 69.75
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Winmodel/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
NTU-NLP-sg/xCodeEval-nl-code-starencoder-ckpt-37 | NTU-NLP-sg | 2023-07-13T18:35:21Z | 0 | 0 | null | [
"arxiv:2303.03004",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2023-07-13T06:59:15Z | ---
license: cc-by-nc-4.0
---
## Model Description
**StarEncoder** trained with training split of `retrieval_nl_code` subset of [xCodeEval](https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval). Trained for 37 epochs.
Code Repo used to train: https://github.com/facebookresearch/DPR
For details result, please follow our [paper](https://arxiv.org/abs/2303.03004). |
mayapapaya/Keyword-Extractor | mayapapaya | 2023-07-13T18:33:59Z | 204 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-13T14:23:08Z | # Model Card for Model ID
This model is meant to extract keywords from text.
- **Model type:** text-classification
- **Language(s) (NLP):** English
- **License:** cc
- **Finetuned from model [optional]:** [More Information Needed]
## Training Details
This model is a fine-tuned version of the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model.
## Training Data
Trained on [51la5/keyword-extraction](https://huggingface.co/datasets/51la5/keyword-extraction) from HuggingFace Hub.
## How to Get Started with the Model
Note: model inputs were tokenized using distilbert-base-uncased tokenizer
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("mayapapaya/Keyword-Extractor")
```
|
chunwoolee0/my_doccls_korean_model | chunwoolee0 | 2023-07-13T18:27:18Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:nsmc",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-12T02:48:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- nsmc
metrics:
- accuracy
model-index:
- name: my_doccls_korean_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: nsmc
type: nsmc
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.90372
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_doccls_korean_model
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the nsmc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2942
- Accuracy: 0.9037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.267 | 1.0 | 2344 | 0.2482 | 0.8987 |
| 0.1751 | 2.0 | 4688 | 0.2523 | 0.9024 |
| 0.1108 | 3.0 | 7032 | 0.2942 | 0.9037 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Knudo/ppo-Huggy | Knudo | 2023-07-13T18:23:56Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-13T18:23:52Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Knudo/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Sandrro/text_to_topic | Sandrro | 2023-07-13T18:15:06Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-13T17:18:08Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: text_to_subfunction_v10_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_to_subfunction_v10_2
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5115
- F1: 0.5638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.8616 | 1.0 | 5400 | 1.7457 | 0.4607 |
| 1.4576 | 2.0 | 10800 | 1.5115 | 0.5638 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.1.0.dev20230414+cu117
- Datasets 2.9.0
- Tokenizers 0.13.3
|
Tanor/BERTovoSENTNEG6 | Tanor | 2023-07-13T18:11:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-09T01:32:38Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: BERTovoSENTNEG6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTovoSENTNEG6
This model is a fine-tuned version of [Tanor/BERTicovoSENTNEG6](https://huggingface.co/Tanor/BERTicovoSENTNEG6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0837
- F1: 0.4878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 53 | 0.0536 | 0.0769 |
| No log | 2.0 | 106 | 0.0482 | 0.5909 |
| No log | 3.0 | 159 | 0.0610 | 0.5532 |
| No log | 4.0 | 212 | 0.0718 | 0.5 |
| No log | 5.0 | 265 | 0.0837 | 0.4878 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
xian79/a2c-PandaReachDense-v2 | xian79 | 2023-07-13T18:05:05Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-13T18:04:46Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.42 +/- 0.30
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
toshiko-shibano/open-calm-7b-qlora | toshiko-shibano | 2023-07-13T18:04:08Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-13T13:01:11Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
grace-pro/afriberta-large-hausa-5e-5 | grace-pro | 2023-07-13T18:00:19Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-13T17:39:12Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-large-hausa-5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-large-hausa-5e-5
This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1680
- Precision: 0.7001
- Recall: 0.5395
- F1: 0.6094
- Accuracy: 0.9652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1427 | 1.0 | 1312 | 0.1258 | 0.6868 | 0.4660 | 0.5553 | 0.9615 |
| 0.1042 | 2.0 | 2624 | 0.1183 | 0.6965 | 0.5150 | 0.5921 | 0.9639 |
| 0.0719 | 3.0 | 3936 | 0.1317 | 0.6943 | 0.5336 | 0.6034 | 0.9646 |
| 0.048 | 4.0 | 5248 | 0.1490 | 0.7099 | 0.5229 | 0.6022 | 0.9650 |
| 0.0341 | 5.0 | 6560 | 0.1680 | 0.7001 | 0.5395 | 0.6094 | 0.9652 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Dlychan/Toketenk | Dlychan | 2023-07-13T17:54:53Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-13T17:46:45Z | ---
license: creativeml-openrail-m
---
|
ayanban011/6_e_200-tiny_tobacco3482_kd_CEKD_t2.5_a0.9 | ayanban011 | 2023-07-13T17:53:00Z | 165 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-13T15:36:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6_e_200-tiny_tobacco3482_kd_CEKD_t2.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6_e_200-tiny_tobacco3482_kd_CEKD_t2.5_a0.9
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5528
- Accuracy: 0.84
- Brier Loss: 0.2493
- Nll: 1.6062
- F1 Micro: 0.8400
- F1 Macro: 0.8256
- Ece: 0.1626
- Aurc: 0.0556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 1.9962 | 0.24 | 0.8634 | 4.6099 | 0.24 | 0.1987 | 0.2924 | 0.7628 |
| No log | 2.0 | 50 | 1.2785 | 0.545 | 0.5960 | 2.6707 | 0.545 | 0.4456 | 0.2808 | 0.2569 |
| No log | 3.0 | 75 | 0.9740 | 0.685 | 0.4455 | 2.0688 | 0.685 | 0.5937 | 0.2231 | 0.1314 |
| No log | 4.0 | 100 | 0.8052 | 0.75 | 0.3628 | 1.5271 | 0.75 | 0.7144 | 0.2031 | 0.0915 |
| No log | 5.0 | 125 | 0.7531 | 0.77 | 0.3324 | 1.5448 | 0.7700 | 0.7348 | 0.1868 | 0.0829 |
| No log | 6.0 | 150 | 0.9730 | 0.735 | 0.4050 | 1.5875 | 0.735 | 0.7229 | 0.1899 | 0.1118 |
| No log | 7.0 | 175 | 0.6437 | 0.83 | 0.2790 | 1.3915 | 0.83 | 0.7996 | 0.1967 | 0.0612 |
| No log | 8.0 | 200 | 0.6670 | 0.78 | 0.2984 | 1.2128 | 0.78 | 0.7429 | 0.1701 | 0.0716 |
| No log | 9.0 | 225 | 0.6012 | 0.845 | 0.2521 | 1.4007 | 0.845 | 0.8208 | 0.1573 | 0.0581 |
| No log | 10.0 | 250 | 0.6754 | 0.795 | 0.3063 | 1.4194 | 0.795 | 0.7638 | 0.2036 | 0.0759 |
| No log | 11.0 | 275 | 0.5950 | 0.83 | 0.2554 | 1.1558 | 0.83 | 0.8053 | 0.1877 | 0.0529 |
| No log | 12.0 | 300 | 0.7061 | 0.79 | 0.3153 | 1.6781 | 0.79 | 0.7676 | 0.1879 | 0.0848 |
| No log | 13.0 | 325 | 0.6349 | 0.805 | 0.2806 | 1.3102 | 0.805 | 0.7767 | 0.1523 | 0.0667 |
| No log | 14.0 | 350 | 0.5973 | 0.82 | 0.2677 | 1.5498 | 0.82 | 0.8020 | 0.1734 | 0.0567 |
| No log | 15.0 | 375 | 0.6186 | 0.825 | 0.2792 | 1.3816 | 0.825 | 0.8170 | 0.1558 | 0.0672 |
| No log | 16.0 | 400 | 0.5694 | 0.815 | 0.2662 | 1.1759 | 0.815 | 0.7962 | 0.1675 | 0.0559 |
| No log | 17.0 | 425 | 0.5993 | 0.825 | 0.2793 | 1.2547 | 0.825 | 0.8112 | 0.1822 | 0.0647 |
| No log | 18.0 | 450 | 0.6333 | 0.815 | 0.2844 | 1.6540 | 0.815 | 0.8024 | 0.1562 | 0.0622 |
| No log | 19.0 | 475 | 0.5208 | 0.845 | 0.2349 | 1.2879 | 0.845 | 0.8155 | 0.1553 | 0.0494 |
| 0.4544 | 20.0 | 500 | 0.5412 | 0.86 | 0.2438 | 1.6726 | 0.8600 | 0.8465 | 0.1531 | 0.0485 |
| 0.4544 | 21.0 | 525 | 0.6171 | 0.825 | 0.2775 | 1.9997 | 0.825 | 0.8183 | 0.1464 | 0.0549 |
| 0.4544 | 22.0 | 550 | 0.5479 | 0.84 | 0.2447 | 1.5015 | 0.8400 | 0.8263 | 0.1481 | 0.0680 |
| 0.4544 | 23.0 | 575 | 0.5508 | 0.835 | 0.2491 | 1.8095 | 0.835 | 0.8209 | 0.1616 | 0.0469 |
| 0.4544 | 24.0 | 600 | 0.5597 | 0.825 | 0.2577 | 1.6676 | 0.825 | 0.8077 | 0.1572 | 0.0486 |
| 0.4544 | 25.0 | 625 | 0.5505 | 0.835 | 0.2535 | 1.6085 | 0.835 | 0.8166 | 0.1664 | 0.0524 |
| 0.4544 | 26.0 | 650 | 0.5347 | 0.84 | 0.2442 | 1.4694 | 0.8400 | 0.8288 | 0.1825 | 0.0505 |
| 0.4544 | 27.0 | 675 | 0.5333 | 0.84 | 0.2418 | 1.5809 | 0.8400 | 0.8280 | 0.1634 | 0.0521 |
| 0.4544 | 28.0 | 700 | 0.5417 | 0.84 | 0.2471 | 1.5289 | 0.8400 | 0.8231 | 0.1500 | 0.0503 |
| 0.4544 | 29.0 | 725 | 0.5369 | 0.845 | 0.2434 | 1.5333 | 0.845 | 0.8318 | 0.1690 | 0.0523 |
| 0.4544 | 30.0 | 750 | 0.5396 | 0.84 | 0.2448 | 1.5269 | 0.8400 | 0.8260 | 0.1689 | 0.0534 |
| 0.4544 | 31.0 | 775 | 0.5411 | 0.845 | 0.2459 | 1.5325 | 0.845 | 0.8289 | 0.1524 | 0.0514 |
| 0.4544 | 32.0 | 800 | 0.5429 | 0.845 | 0.2456 | 1.5239 | 0.845 | 0.8318 | 0.1550 | 0.0527 |
| 0.4544 | 33.0 | 825 | 0.5445 | 0.84 | 0.2468 | 1.5275 | 0.8400 | 0.8231 | 0.1626 | 0.0535 |
| 0.4544 | 34.0 | 850 | 0.5432 | 0.845 | 0.2461 | 1.5210 | 0.845 | 0.8289 | 0.1557 | 0.0533 |
| 0.4544 | 35.0 | 875 | 0.5438 | 0.845 | 0.2459 | 1.5269 | 0.845 | 0.8318 | 0.1564 | 0.0533 |
| 0.4544 | 36.0 | 900 | 0.5451 | 0.845 | 0.2466 | 1.5262 | 0.845 | 0.8289 | 0.1610 | 0.0541 |
| 0.4544 | 37.0 | 925 | 0.5415 | 0.85 | 0.2448 | 1.5254 | 0.85 | 0.8348 | 0.1667 | 0.0528 |
| 0.4544 | 38.0 | 950 | 0.5447 | 0.845 | 0.2461 | 1.5367 | 0.845 | 0.8318 | 0.1519 | 0.0535 |
| 0.4544 | 39.0 | 975 | 0.5437 | 0.85 | 0.2454 | 1.5223 | 0.85 | 0.8348 | 0.1605 | 0.0536 |
| 0.0607 | 40.0 | 1000 | 0.5445 | 0.845 | 0.2460 | 1.5252 | 0.845 | 0.8318 | 0.1610 | 0.0539 |
| 0.0607 | 41.0 | 1025 | 0.5460 | 0.845 | 0.2465 | 1.5925 | 0.845 | 0.8318 | 0.1416 | 0.0541 |
| 0.0607 | 42.0 | 1050 | 0.5466 | 0.84 | 0.2467 | 1.5304 | 0.8400 | 0.8260 | 0.1555 | 0.0542 |
| 0.0607 | 43.0 | 1075 | 0.5458 | 0.84 | 0.2464 | 1.5272 | 0.8400 | 0.8231 | 0.1633 | 0.0539 |
| 0.0607 | 44.0 | 1100 | 0.5460 | 0.85 | 0.2464 | 1.5459 | 0.85 | 0.8377 | 0.1534 | 0.0550 |
| 0.0607 | 45.0 | 1125 | 0.5464 | 0.85 | 0.2465 | 1.5390 | 0.85 | 0.8377 | 0.1471 | 0.0544 |
| 0.0607 | 46.0 | 1150 | 0.5462 | 0.85 | 0.2465 | 1.5972 | 0.85 | 0.8377 | 0.1549 | 0.0540 |
| 0.0607 | 47.0 | 1175 | 0.5475 | 0.85 | 0.2472 | 1.5910 | 0.85 | 0.8377 | 0.1592 | 0.0546 |
| 0.0607 | 48.0 | 1200 | 0.5482 | 0.845 | 0.2475 | 1.5943 | 0.845 | 0.8294 | 0.1548 | 0.0545 |
| 0.0607 | 49.0 | 1225 | 0.5475 | 0.845 | 0.2471 | 1.5922 | 0.845 | 0.8294 | 0.1534 | 0.0545 |
| 0.0607 | 50.0 | 1250 | 0.5476 | 0.85 | 0.2470 | 1.5908 | 0.85 | 0.8377 | 0.1539 | 0.0545 |
| 0.0607 | 51.0 | 1275 | 0.5480 | 0.845 | 0.2471 | 1.5990 | 0.845 | 0.8322 | 0.1545 | 0.0547 |
| 0.0607 | 52.0 | 1300 | 0.5479 | 0.85 | 0.2469 | 1.5917 | 0.85 | 0.8348 | 0.1688 | 0.0547 |
| 0.0607 | 53.0 | 1325 | 0.5479 | 0.845 | 0.2472 | 1.6052 | 0.845 | 0.8322 | 0.1545 | 0.0543 |
| 0.0607 | 54.0 | 1350 | 0.5490 | 0.85 | 0.2477 | 1.5948 | 0.85 | 0.8348 | 0.1610 | 0.0545 |
| 0.0607 | 55.0 | 1375 | 0.5489 | 0.85 | 0.2474 | 1.5967 | 0.85 | 0.8377 | 0.1543 | 0.0560 |
| 0.0607 | 56.0 | 1400 | 0.5499 | 0.845 | 0.2480 | 1.5939 | 0.845 | 0.8294 | 0.1561 | 0.0549 |
| 0.0607 | 57.0 | 1425 | 0.5492 | 0.845 | 0.2476 | 1.6048 | 0.845 | 0.8322 | 0.1570 | 0.0549 |
| 0.0607 | 58.0 | 1450 | 0.5497 | 0.845 | 0.2478 | 1.6004 | 0.845 | 0.8322 | 0.1724 | 0.0548 |
| 0.0607 | 59.0 | 1475 | 0.5496 | 0.85 | 0.2477 | 1.5982 | 0.85 | 0.8377 | 0.1634 | 0.0546 |
| 0.0589 | 60.0 | 1500 | 0.5497 | 0.845 | 0.2478 | 1.5969 | 0.845 | 0.8322 | 0.1592 | 0.0545 |
| 0.0589 | 61.0 | 1525 | 0.5492 | 0.85 | 0.2476 | 1.6095 | 0.85 | 0.8377 | 0.1630 | 0.0547 |
| 0.0589 | 62.0 | 1550 | 0.5507 | 0.845 | 0.2483 | 1.6060 | 0.845 | 0.8322 | 0.1649 | 0.0554 |
| 0.0589 | 63.0 | 1575 | 0.5490 | 0.845 | 0.2474 | 1.6021 | 0.845 | 0.8322 | 0.1635 | 0.0546 |
| 0.0589 | 64.0 | 1600 | 0.5508 | 0.845 | 0.2483 | 1.5970 | 0.845 | 0.8294 | 0.1697 | 0.0552 |
| 0.0589 | 65.0 | 1625 | 0.5505 | 0.84 | 0.2483 | 1.6023 | 0.8400 | 0.8256 | 0.1658 | 0.0553 |
| 0.0589 | 66.0 | 1650 | 0.5503 | 0.845 | 0.2481 | 1.6032 | 0.845 | 0.8322 | 0.1637 | 0.0546 |
| 0.0589 | 67.0 | 1675 | 0.5514 | 0.84 | 0.2486 | 1.6000 | 0.8400 | 0.8227 | 0.1649 | 0.0559 |
| 0.0589 | 68.0 | 1700 | 0.5516 | 0.84 | 0.2487 | 1.5979 | 0.8400 | 0.8227 | 0.1649 | 0.0550 |
| 0.0589 | 69.0 | 1725 | 0.5510 | 0.84 | 0.2485 | 1.6005 | 0.8400 | 0.8256 | 0.1639 | 0.0548 |
| 0.0589 | 70.0 | 1750 | 0.5510 | 0.84 | 0.2484 | 1.5990 | 0.8400 | 0.8256 | 0.1653 | 0.0549 |
| 0.0589 | 71.0 | 1775 | 0.5517 | 0.84 | 0.2487 | 1.6080 | 0.8400 | 0.8256 | 0.1640 | 0.0558 |
| 0.0589 | 72.0 | 1800 | 0.5525 | 0.84 | 0.2491 | 1.6069 | 0.8400 | 0.8227 | 0.1669 | 0.0558 |
| 0.0589 | 73.0 | 1825 | 0.5519 | 0.84 | 0.2488 | 1.6147 | 0.8400 | 0.8256 | 0.1638 | 0.0554 |
| 0.0589 | 74.0 | 1850 | 0.5519 | 0.84 | 0.2487 | 1.6027 | 0.8400 | 0.8256 | 0.1657 | 0.0558 |
| 0.0589 | 75.0 | 1875 | 0.5522 | 0.84 | 0.2490 | 1.6082 | 0.8400 | 0.8256 | 0.1717 | 0.0556 |
| 0.0589 | 76.0 | 1900 | 0.5523 | 0.84 | 0.2489 | 1.6022 | 0.8400 | 0.8256 | 0.1645 | 0.0553 |
| 0.0589 | 77.0 | 1925 | 0.5514 | 0.84 | 0.2486 | 1.6027 | 0.8400 | 0.8256 | 0.1635 | 0.0551 |
| 0.0589 | 78.0 | 1950 | 0.5518 | 0.84 | 0.2488 | 1.6007 | 0.8400 | 0.8256 | 0.1641 | 0.0556 |
| 0.0589 | 79.0 | 1975 | 0.5522 | 0.84 | 0.2490 | 1.6057 | 0.8400 | 0.8256 | 0.1637 | 0.0556 |
| 0.0588 | 80.0 | 2000 | 0.5520 | 0.84 | 0.2489 | 1.6110 | 0.8400 | 0.8256 | 0.1658 | 0.0552 |
| 0.0588 | 81.0 | 2025 | 0.5521 | 0.84 | 0.2489 | 1.6047 | 0.8400 | 0.8256 | 0.1659 | 0.0555 |
| 0.0588 | 82.0 | 2050 | 0.5521 | 0.84 | 0.2490 | 1.6015 | 0.8400 | 0.8256 | 0.1635 | 0.0551 |
| 0.0588 | 83.0 | 2075 | 0.5521 | 0.84 | 0.2489 | 1.6115 | 0.8400 | 0.8256 | 0.1637 | 0.0553 |
| 0.0588 | 84.0 | 2100 | 0.5523 | 0.84 | 0.2490 | 1.6033 | 0.8400 | 0.8256 | 0.1738 | 0.0553 |
| 0.0588 | 85.0 | 2125 | 0.5525 | 0.84 | 0.2491 | 1.6072 | 0.8400 | 0.8256 | 0.1658 | 0.0555 |
| 0.0588 | 86.0 | 2150 | 0.5521 | 0.84 | 0.2489 | 1.6057 | 0.8400 | 0.8256 | 0.1574 | 0.0553 |
| 0.0588 | 87.0 | 2175 | 0.5527 | 0.84 | 0.2492 | 1.6605 | 0.8400 | 0.8256 | 0.1610 | 0.0555 |
| 0.0588 | 88.0 | 2200 | 0.5526 | 0.84 | 0.2491 | 1.6056 | 0.8400 | 0.8256 | 0.1544 | 0.0556 |
| 0.0588 | 89.0 | 2225 | 0.5527 | 0.84 | 0.2492 | 1.6126 | 0.8400 | 0.8256 | 0.1547 | 0.0556 |
| 0.0588 | 90.0 | 2250 | 0.5525 | 0.84 | 0.2491 | 1.6059 | 0.8400 | 0.8256 | 0.1525 | 0.0556 |
| 0.0588 | 91.0 | 2275 | 0.5528 | 0.84 | 0.2492 | 1.6060 | 0.8400 | 0.8256 | 0.1604 | 0.0556 |
| 0.0588 | 92.0 | 2300 | 0.5526 | 0.84 | 0.2491 | 1.6080 | 0.8400 | 0.8256 | 0.1525 | 0.0555 |
| 0.0588 | 93.0 | 2325 | 0.5527 | 0.84 | 0.2492 | 1.6034 | 0.8400 | 0.8256 | 0.1547 | 0.0556 |
| 0.0588 | 94.0 | 2350 | 0.5526 | 0.84 | 0.2492 | 1.6040 | 0.8400 | 0.8256 | 0.1673 | 0.0555 |
| 0.0588 | 95.0 | 2375 | 0.5529 | 0.84 | 0.2493 | 1.6053 | 0.8400 | 0.8256 | 0.1545 | 0.0556 |
| 0.0588 | 96.0 | 2400 | 0.5526 | 0.84 | 0.2492 | 1.6050 | 0.8400 | 0.8256 | 0.1626 | 0.0555 |
| 0.0588 | 97.0 | 2425 | 0.5528 | 0.84 | 0.2492 | 1.6040 | 0.8400 | 0.8256 | 0.1686 | 0.0557 |
| 0.0588 | 98.0 | 2450 | 0.5528 | 0.84 | 0.2492 | 1.6068 | 0.8400 | 0.8256 | 0.1626 | 0.0555 |
| 0.0588 | 99.0 | 2475 | 0.5528 | 0.84 | 0.2492 | 1.6065 | 0.8400 | 0.8256 | 0.1626 | 0.0556 |
| 0.0588 | 100.0 | 2500 | 0.5528 | 0.84 | 0.2493 | 1.6062 | 0.8400 | 0.8256 | 0.1626 | 0.0556 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MBMMurad/BanglaBERT_Person_Name_Extractor | MBMMurad | 2023-07-13T17:52:09Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"electra",
"token-classification",
"bn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-12T21:19:24Z | ---
language:
- bn
metrics:
- f1
pipeline_tag: token-classification
---
# Bangla-Person-Name-Extractor
This repository contains the implementation of a Bangla Person Name Extractor model which is able to extract Person name entities from a given sentence. We approached it as a token classification task i.e. tagging each token with either a Person's name or not. We leveraged the [BanglaBERT](http://https://github.com/csebuetnlp/banglabert) model for our task, finetuning it for a binary classification task using a custom-prepare dataset. We have deployed the model into huggingface for easier access and use case.
# How to use it?
[This Notebook](https://github.com/MBMMurad/Bangla-Person-Name-Extractor/blob/main/Inference_template.ipynb) contains the required Inference Template on a sentence.
<br></br>
You can also directly infer using the following code snippet. Just change the sentence.
```
from transformers import AutoModelForPreTraining, AutoTokenizer,AutoModelForTokenClassification #!pip install transformers==4.30.2
from normalizer import normalize #pip install git+https://github.com/csebuetnlp/normalizer
import torch #pip install torch
import numpy as np #!pip install numpy==1.23.5
model = AutoModelForTokenClassification.from_pretrained("MBMMurad/BanglaBERT_Person_Name_Extractor")
tokenizer = AutoTokenizer.from_pretrained("MBMMurad/BanglaBERT_Person_Name_Extractor")
def inference_fn(sentence):
sentence = normalize(sentence)
tokens = tokenizer.tokenize(sentence)
inputs = tokenizer.encode(sentence,return_tensors="pt")
outputs = model(inputs).logits
predictions = torch.argmax(outputs[0],axis=1)[1:-1].numpy()
idxs = np.where(predictions==1)
return np.array(tokens)[idxs]
sentence = "আব্দুর রহিম নামের কাস্টমারকে একশ টাকা বাকি দিলাম।"
pred = inference_fn(sentence)
print(f"Input Sentence : {sentence}")
print(f"Person Name Entities : {pred}")
sentence = "ইঞ্জিনিয়ার্স ইনস্টিটিউশন চট্টগ্রামের সাবেক সভাপতি প্রকৌশলী দেলোয়ার হোসেন মজুমদার প্রথম আলোকে বলেন, 'সংকট নিরসনে বর্তমান খালগুলোকে পূর্ণ প্রবাহে ফিরিয়ে আনার পাশাপাশি নতুন তিনটি খাল খনন জরুরি।'"
pred = inference_fn(sentence)
print(f"Input Sentence : {sentence}")
print(f"Person Name Entities : {pred}")
sentence = "দলীয় নেতারা তাঁর বাসভবনে যেতে চাইলে আটক হন।"
pred = inference_fn(sentence)
print(f"Input Sentence : {sentence}")
print(f"Person Name Entities : {pred}")
```
**Output:**
```
Input Sentence : আব্দুর রহিম নামের কাস্টমারকে একশ টাকা বাকি দিলাম।
Person Name Entities : ['আব্দুর' 'রহিম']
Input Sentence : ইঞ্জিনিয়ার্স ইনস্টিটিউশন চট্টগ্রামের সাবেক সভাপতি প্রকৌশলী দেলোয়ার হোসেন মজুমদার প্রথম আলোকে বলেন, 'সংকট নিরসনে বর্তমান খালগুলোকে পূর্ণ প্রবাহে ফিরিয়ে আনার পাশাপাশি নতুন তিনটি খাল খনন জরুরি।'
Person Name Entities : ['দেলোয়ার' 'হোসেন' 'মজুমদার']
Input Sentence : দলীয় নেতারা তাঁর বাসভবনে যেতে চাইলে আটক হন।
Person Name Entities : []
```
# Datasets
We used two datasets to train and evaluate our pipeline.
1. [Bengali-NER/annotated data at master · Rifat1493/Bengali-NER](http://https://github.com/Rifat1493/Bengali-NER/tree/master/annotated%20data)
2. [banglakit/bengali-ner-data](http://https://raw.githubusercontent.com/banglakit/bengali-ner-data/master/main.jsonl)
The annotation formats for both datasets were quite different, so we had to preprocess both of them before merging them. Please refer to [this notebook](https://github.com/MBMMurad/Bangla-Person-Name-Extractor/blob/main/prepare-dataset.ipynb) for preparing the dataset as required.
# Training and Evaluation
We treated this problem as a token classification task.So it seemed perfect to finetune the BanglaBERT model for our purpose. [BanglaBERT ](https://huggingface.co/csebuetnlp/banglabert)is an [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) discriminator model pretrained with the Replaced Token Detection (RTD) objective. Finetuned models using this checkpoint achieve state-of-the-art results on many of the NLP tasks in bengali.
We mainly finetuned two checkpoints of BanglaBERT.
1. [BanglaBERT](https://huggingface.co/csebuetnlp/banglabert)
2. [BanglaEERT small](https://huggingface.co/csebuetnlp/banglabert_small)
BanglaBERT performed better than BanglaBERT small ( 83% F1 score vs 79% F1 score on the test set) .
Please refer to [this notebook](https://github.com/MBMMurad/Bangla-Person-Name-Extractor/blob/main/Training%20Notebook%20%3A%20Person%20Name%20Extractor%20using%20BanglaBERT.ipynb) to see the training process.
**Quantitative results**
Please refer to [this notebook](https://github.com/MBMMurad/Bangla-Person-Name-Extractor/blob/main/Inference%20and%20Evaluation%20Notebook.ipynb) to see the evaluation process.
<br></br>

|
ayanban011/6_e_200-tiny_tobacco3482_kd_CEKD_t2.5_a0.5 | ayanban011 | 2023-07-13T17:45:53Z | 167 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-13T15:29:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6_e_200-tiny_tobacco3482_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6_e_200-tiny_tobacco3482_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4137
- Accuracy: 0.83
- Brier Loss: 0.2631
- Nll: 1.5189
- F1 Micro: 0.83
- F1 Macro: 0.8172
- Ece: 0.2007
- Aurc: 0.0591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 1.6265 | 0.23 | 0.8647 | 5.1432 | 0.23 | 0.1847 | 0.2751 | 0.7516 |
| No log | 2.0 | 50 | 1.0240 | 0.505 | 0.6074 | 2.7425 | 0.505 | 0.3980 | 0.2977 | 0.2705 |
| No log | 3.0 | 75 | 0.8130 | 0.655 | 0.4809 | 2.4939 | 0.655 | 0.5584 | 0.2576 | 0.1502 |
| No log | 4.0 | 100 | 0.6703 | 0.735 | 0.3867 | 1.3509 | 0.735 | 0.6895 | 0.2334 | 0.1109 |
| No log | 5.0 | 125 | 0.6313 | 0.755 | 0.3420 | 1.2521 | 0.755 | 0.7207 | 0.2081 | 0.0789 |
| No log | 6.0 | 150 | 0.6598 | 0.76 | 0.3543 | 1.4171 | 0.76 | 0.7103 | 0.2275 | 0.0886 |
| No log | 7.0 | 175 | 0.5669 | 0.77 | 0.3368 | 1.5060 | 0.7700 | 0.7351 | 0.2247 | 0.0941 |
| No log | 8.0 | 200 | 0.5486 | 0.775 | 0.3004 | 1.1511 | 0.775 | 0.7413 | 0.2252 | 0.0640 |
| No log | 9.0 | 225 | 0.5456 | 0.795 | 0.3141 | 1.4663 | 0.795 | 0.7762 | 0.2198 | 0.0889 |
| No log | 10.0 | 250 | 0.4954 | 0.82 | 0.2819 | 1.4644 | 0.82 | 0.7981 | 0.2150 | 0.0649 |
| No log | 11.0 | 275 | 0.4804 | 0.805 | 0.2866 | 1.3705 | 0.805 | 0.7927 | 0.2078 | 0.0658 |
| No log | 12.0 | 300 | 0.5234 | 0.785 | 0.3152 | 1.5290 | 0.785 | 0.7681 | 0.2149 | 0.0637 |
| No log | 13.0 | 325 | 0.4701 | 0.815 | 0.2839 | 1.4490 | 0.815 | 0.8010 | 0.2315 | 0.0586 |
| No log | 14.0 | 350 | 0.4859 | 0.795 | 0.2807 | 1.1224 | 0.795 | 0.7957 | 0.2170 | 0.0512 |
| No log | 15.0 | 375 | 0.5580 | 0.79 | 0.3272 | 1.7539 | 0.79 | 0.7735 | 0.2376 | 0.0708 |
| No log | 16.0 | 400 | 0.4918 | 0.8 | 0.2961 | 1.5112 | 0.8000 | 0.7988 | 0.1850 | 0.0568 |
| No log | 17.0 | 425 | 0.4442 | 0.8 | 0.2846 | 1.6182 | 0.8000 | 0.7767 | 0.2083 | 0.0712 |
| No log | 18.0 | 450 | 0.4460 | 0.82 | 0.2760 | 1.6839 | 0.82 | 0.8027 | 0.2127 | 0.0523 |
| No log | 19.0 | 475 | 0.4423 | 0.825 | 0.2676 | 1.3774 | 0.825 | 0.8176 | 0.1853 | 0.0557 |
| 0.4472 | 20.0 | 500 | 0.4998 | 0.81 | 0.2910 | 1.7711 | 0.81 | 0.8152 | 0.2181 | 0.0635 |
| 0.4472 | 21.0 | 525 | 0.4579 | 0.83 | 0.2871 | 1.7025 | 0.83 | 0.8135 | 0.1927 | 0.0696 |
| 0.4472 | 22.0 | 550 | 0.4421 | 0.825 | 0.2683 | 1.6453 | 0.825 | 0.8215 | 0.1929 | 0.0613 |
| 0.4472 | 23.0 | 575 | 0.4368 | 0.8 | 0.2821 | 1.7298 | 0.8000 | 0.7684 | 0.2060 | 0.0771 |
| 0.4472 | 24.0 | 600 | 0.4310 | 0.83 | 0.2689 | 1.4699 | 0.83 | 0.8163 | 0.2067 | 0.0556 |
| 0.4472 | 25.0 | 625 | 0.4394 | 0.83 | 0.2751 | 1.5955 | 0.83 | 0.8166 | 0.2138 | 0.0681 |
| 0.4472 | 26.0 | 650 | 0.4395 | 0.815 | 0.2786 | 1.6788 | 0.815 | 0.8033 | 0.2034 | 0.0643 |
| 0.4472 | 27.0 | 675 | 0.4118 | 0.84 | 0.2578 | 1.5641 | 0.8400 | 0.8293 | 0.2024 | 0.0554 |
| 0.4472 | 28.0 | 700 | 0.4273 | 0.82 | 0.2707 | 1.7118 | 0.82 | 0.8090 | 0.2133 | 0.0674 |
| 0.4472 | 29.0 | 725 | 0.4207 | 0.835 | 0.2648 | 1.6469 | 0.835 | 0.8206 | 0.1948 | 0.0652 |
| 0.4472 | 30.0 | 750 | 0.4172 | 0.825 | 0.2620 | 1.5024 | 0.825 | 0.8114 | 0.1833 | 0.0601 |
| 0.4472 | 31.0 | 775 | 0.4148 | 0.825 | 0.2610 | 1.4994 | 0.825 | 0.8070 | 0.2052 | 0.0593 |
| 0.4472 | 32.0 | 800 | 0.4148 | 0.825 | 0.2627 | 1.6293 | 0.825 | 0.8088 | 0.2080 | 0.0618 |
| 0.4472 | 33.0 | 825 | 0.4159 | 0.825 | 0.2625 | 1.5069 | 0.825 | 0.8135 | 0.2082 | 0.0604 |
| 0.4472 | 34.0 | 850 | 0.4168 | 0.825 | 0.2638 | 1.5770 | 0.825 | 0.8137 | 0.1888 | 0.0588 |
| 0.4472 | 35.0 | 875 | 0.4181 | 0.82 | 0.2640 | 1.5404 | 0.82 | 0.8043 | 0.2145 | 0.0582 |
| 0.4472 | 36.0 | 900 | 0.4154 | 0.83 | 0.2618 | 1.5719 | 0.83 | 0.8165 | 0.1965 | 0.0586 |
| 0.4472 | 37.0 | 925 | 0.4160 | 0.825 | 0.2632 | 1.5840 | 0.825 | 0.8137 | 0.2003 | 0.0604 |
| 0.4472 | 38.0 | 950 | 0.4133 | 0.83 | 0.2616 | 1.5711 | 0.83 | 0.8163 | 0.2040 | 0.0596 |
| 0.4472 | 39.0 | 975 | 0.4167 | 0.825 | 0.2635 | 1.5210 | 0.825 | 0.8138 | 0.1930 | 0.0590 |
| 0.0652 | 40.0 | 1000 | 0.4162 | 0.83 | 0.2630 | 1.6312 | 0.83 | 0.8163 | 0.1973 | 0.0593 |
| 0.0652 | 41.0 | 1025 | 0.4144 | 0.83 | 0.2626 | 1.5787 | 0.83 | 0.8163 | 0.2068 | 0.0603 |
| 0.0652 | 42.0 | 1050 | 0.4150 | 0.83 | 0.2631 | 1.5789 | 0.83 | 0.8163 | 0.1970 | 0.0588 |
| 0.0652 | 43.0 | 1075 | 0.4158 | 0.825 | 0.2635 | 1.5833 | 0.825 | 0.8138 | 0.1927 | 0.0597 |
| 0.0652 | 44.0 | 1100 | 0.4132 | 0.83 | 0.2622 | 1.5130 | 0.83 | 0.8163 | 0.2030 | 0.0593 |
| 0.0652 | 45.0 | 1125 | 0.4146 | 0.83 | 0.2630 | 1.6312 | 0.83 | 0.8165 | 0.2010 | 0.0587 |
| 0.0652 | 46.0 | 1150 | 0.4138 | 0.825 | 0.2624 | 1.5301 | 0.825 | 0.8135 | 0.2065 | 0.0587 |
| 0.0652 | 47.0 | 1175 | 0.4142 | 0.83 | 0.2627 | 1.6292 | 0.83 | 0.8163 | 0.1984 | 0.0591 |
| 0.0652 | 48.0 | 1200 | 0.4146 | 0.825 | 0.2629 | 1.5735 | 0.825 | 0.8137 | 0.1998 | 0.0589 |
| 0.0652 | 49.0 | 1225 | 0.4143 | 0.83 | 0.2630 | 1.5276 | 0.83 | 0.8163 | 0.2116 | 0.0599 |
| 0.0652 | 50.0 | 1250 | 0.4140 | 0.83 | 0.2628 | 1.5705 | 0.83 | 0.8163 | 0.1966 | 0.0590 |
| 0.0652 | 51.0 | 1275 | 0.4152 | 0.825 | 0.2637 | 1.5747 | 0.825 | 0.8138 | 0.1835 | 0.0593 |
| 0.0652 | 52.0 | 1300 | 0.4145 | 0.825 | 0.2629 | 1.5796 | 0.825 | 0.8137 | 0.1926 | 0.0593 |
| 0.0652 | 53.0 | 1325 | 0.4147 | 0.825 | 0.2631 | 1.6323 | 0.825 | 0.8138 | 0.1838 | 0.0588 |
| 0.0652 | 54.0 | 1350 | 0.4141 | 0.83 | 0.2628 | 1.5763 | 0.83 | 0.8163 | 0.2035 | 0.0592 |
| 0.0652 | 55.0 | 1375 | 0.4137 | 0.83 | 0.2630 | 1.5751 | 0.83 | 0.8163 | 0.2042 | 0.0590 |
| 0.0652 | 56.0 | 1400 | 0.4145 | 0.83 | 0.2632 | 1.6307 | 0.83 | 0.8163 | 0.1981 | 0.0588 |
| 0.0652 | 57.0 | 1425 | 0.4149 | 0.825 | 0.2634 | 1.5225 | 0.825 | 0.8137 | 0.2008 | 0.0589 |
| 0.0652 | 58.0 | 1450 | 0.4146 | 0.83 | 0.2634 | 1.5725 | 0.83 | 0.8163 | 0.2121 | 0.0589 |
| 0.0652 | 59.0 | 1475 | 0.4142 | 0.83 | 0.2632 | 1.5214 | 0.83 | 0.8163 | 0.2028 | 0.0590 |
| 0.0614 | 60.0 | 1500 | 0.4145 | 0.83 | 0.2634 | 1.5237 | 0.83 | 0.8163 | 0.1981 | 0.0585 |
| 0.0614 | 61.0 | 1525 | 0.4142 | 0.83 | 0.2630 | 1.5710 | 0.83 | 0.8163 | 0.2070 | 0.0591 |
| 0.0614 | 62.0 | 1550 | 0.4139 | 0.825 | 0.2631 | 1.5733 | 0.825 | 0.8135 | 0.1986 | 0.0594 |
| 0.0614 | 63.0 | 1575 | 0.4139 | 0.825 | 0.2630 | 1.5813 | 0.825 | 0.8136 | 0.1984 | 0.0593 |
| 0.0614 | 64.0 | 1600 | 0.4138 | 0.83 | 0.2629 | 1.5729 | 0.83 | 0.8163 | 0.2035 | 0.0590 |
| 0.0614 | 65.0 | 1625 | 0.4139 | 0.825 | 0.2629 | 1.5715 | 0.825 | 0.8136 | 0.2026 | 0.0593 |
| 0.0614 | 66.0 | 1650 | 0.4136 | 0.825 | 0.2629 | 1.5768 | 0.825 | 0.8135 | 0.1988 | 0.0592 |
| 0.0614 | 67.0 | 1675 | 0.4139 | 0.825 | 0.2629 | 1.5709 | 0.825 | 0.8135 | 0.1987 | 0.0593 |
| 0.0614 | 68.0 | 1700 | 0.4143 | 0.825 | 0.2633 | 1.5744 | 0.825 | 0.8138 | 0.1896 | 0.0595 |
| 0.0614 | 69.0 | 1725 | 0.4142 | 0.825 | 0.2632 | 1.5752 | 0.825 | 0.8138 | 0.1896 | 0.0593 |
| 0.0614 | 70.0 | 1750 | 0.4142 | 0.825 | 0.2632 | 1.5769 | 0.825 | 0.8138 | 0.1879 | 0.0594 |
| 0.0614 | 71.0 | 1775 | 0.4138 | 0.83 | 0.2630 | 1.5734 | 0.83 | 0.8163 | 0.2073 | 0.0588 |
| 0.0614 | 72.0 | 1800 | 0.4140 | 0.825 | 0.2631 | 1.5734 | 0.825 | 0.8138 | 0.1977 | 0.0593 |
| 0.0614 | 73.0 | 1825 | 0.4135 | 0.83 | 0.2629 | 1.5711 | 0.83 | 0.8163 | 0.2035 | 0.0589 |
| 0.0614 | 74.0 | 1850 | 0.4140 | 0.83 | 0.2632 | 1.5717 | 0.83 | 0.8163 | 0.2038 | 0.0590 |
| 0.0614 | 75.0 | 1875 | 0.4141 | 0.825 | 0.2633 | 1.5205 | 0.825 | 0.8138 | 0.1838 | 0.0593 |
| 0.0614 | 76.0 | 1900 | 0.4138 | 0.825 | 0.2631 | 1.5218 | 0.825 | 0.8137 | 0.1838 | 0.0595 |
| 0.0614 | 77.0 | 1925 | 0.4134 | 0.825 | 0.2628 | 1.5710 | 0.825 | 0.8135 | 0.1937 | 0.0591 |
| 0.0614 | 78.0 | 1950 | 0.4135 | 0.83 | 0.2629 | 1.5688 | 0.83 | 0.8163 | 0.2067 | 0.0588 |
| 0.0614 | 79.0 | 1975 | 0.4138 | 0.825 | 0.2631 | 1.5143 | 0.825 | 0.8137 | 0.1942 | 0.0592 |
| 0.0613 | 80.0 | 2000 | 0.4134 | 0.825 | 0.2628 | 1.5152 | 0.825 | 0.8135 | 0.1939 | 0.0591 |
| 0.0613 | 81.0 | 2025 | 0.4139 | 0.825 | 0.2632 | 1.5144 | 0.825 | 0.8136 | 0.1903 | 0.0593 |
| 0.0613 | 82.0 | 2050 | 0.4139 | 0.83 | 0.2632 | 1.5242 | 0.83 | 0.8163 | 0.1894 | 0.0589 |
| 0.0613 | 83.0 | 2075 | 0.4138 | 0.825 | 0.2631 | 1.5159 | 0.825 | 0.8136 | 0.2014 | 0.0594 |
| 0.0613 | 84.0 | 2100 | 0.4137 | 0.825 | 0.2631 | 1.5707 | 0.825 | 0.8136 | 0.1954 | 0.0592 |
| 0.0613 | 85.0 | 2125 | 0.4136 | 0.825 | 0.2630 | 1.5252 | 0.825 | 0.8136 | 0.1878 | 0.0592 |
| 0.0613 | 86.0 | 2150 | 0.4138 | 0.83 | 0.2630 | 1.5186 | 0.83 | 0.8172 | 0.2024 | 0.0588 |
| 0.0613 | 87.0 | 2175 | 0.4139 | 0.825 | 0.2632 | 1.5201 | 0.825 | 0.8138 | 0.1927 | 0.0592 |
| 0.0613 | 88.0 | 2200 | 0.4138 | 0.83 | 0.2631 | 1.5285 | 0.83 | 0.8172 | 0.1897 | 0.0591 |
| 0.0613 | 89.0 | 2225 | 0.4137 | 0.825 | 0.2631 | 1.5185 | 0.825 | 0.8136 | 0.1956 | 0.0593 |
| 0.0613 | 90.0 | 2250 | 0.4137 | 0.83 | 0.2631 | 1.5212 | 0.83 | 0.8172 | 0.2007 | 0.0591 |
| 0.0613 | 91.0 | 2275 | 0.4138 | 0.825 | 0.2631 | 1.5185 | 0.825 | 0.8138 | 0.1915 | 0.0593 |
| 0.0613 | 92.0 | 2300 | 0.4136 | 0.83 | 0.2630 | 1.5174 | 0.83 | 0.8172 | 0.2067 | 0.0590 |
| 0.0613 | 93.0 | 2325 | 0.4137 | 0.83 | 0.2631 | 1.5204 | 0.83 | 0.8172 | 0.1939 | 0.0591 |
| 0.0613 | 94.0 | 2350 | 0.4137 | 0.83 | 0.2631 | 1.5255 | 0.83 | 0.8172 | 0.2007 | 0.0592 |
| 0.0613 | 95.0 | 2375 | 0.4137 | 0.83 | 0.2631 | 1.5161 | 0.83 | 0.8172 | 0.1966 | 0.0591 |
| 0.0613 | 96.0 | 2400 | 0.4136 | 0.83 | 0.2630 | 1.5180 | 0.83 | 0.8172 | 0.2007 | 0.0590 |
| 0.0613 | 97.0 | 2425 | 0.4137 | 0.83 | 0.2631 | 1.5176 | 0.83 | 0.8172 | 0.1966 | 0.0591 |
| 0.0613 | 98.0 | 2450 | 0.4137 | 0.83 | 0.2631 | 1.5194 | 0.83 | 0.8172 | 0.1966 | 0.0590 |
| 0.0613 | 99.0 | 2475 | 0.4137 | 0.83 | 0.2631 | 1.5195 | 0.83 | 0.8172 | 0.2005 | 0.0591 |
| 0.0613 | 100.0 | 2500 | 0.4137 | 0.83 | 0.2631 | 1.5189 | 0.83 | 0.8172 | 0.2007 | 0.0591 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
shirsendu-ryo/bert-finetuned-squad | shirsendu-ryo | 2023-07-13T17:42:19Z | 72 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-07-11T05:57:26Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: shirsendu-ryo/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# shirsendu-ryo/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2785
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2785 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jrosenzw/autotrain-diabetes-detection-2-74371139581 | jrosenzw | 2023-07-13T17:41:53Z | 0 | 0 | sklearn | [
"sklearn",
"joblib",
"xgboost",
"autotrain",
"tabular",
"classification",
"tabular-classification",
"dataset:jrosenzw/autotrain-data-diabetes-detection-2",
"co2_eq_emissions",
"region:us"
]
| tabular-classification | 2023-07-13T17:18:07Z | ---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- jrosenzw/autotrain-data-diabetes-detection-2
co2_eq_emissions:
emissions: 0.6960832080920549
library_name: sklearn
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 74371139581
- CO2 Emissions (in grams): 0.6961
## Validation Metrics
- Loss: 0.468
- Accuracy: 0.753
- Precision: 0.667
- Recall: 0.593
- AUC: 0.839
- F1: 0.627
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
# data = pd.read_csv("data.csv")
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
``` |
koruni/charsembeds | koruni | 2023-07-13T17:34:37Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-13T17:30:36Z | ---
license: creativeml-openrail-m
---
|
grace-pro/xlmr-base-finetuned-hausa-2e-3 | grace-pro | 2023-07-13T17:31:39Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-13T17:03:58Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlmr-base-finetuned-hausa-2e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-base-finetuned-hausa-2e-3
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2694
- Precision: 0.1719
- Recall: 0.0235
- F1: 0.0414
- Accuracy: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2724 | 1.0 | 1312 | 0.2700 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2754 | 2.0 | 2624 | 0.2689 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2743 | 3.0 | 3936 | 0.2708 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2745 | 4.0 | 5248 | 0.2692 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2713 | 5.0 | 6560 | 0.2694 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sheileshr/roaModel | sheileshr | 2023-07-13T17:14:57Z | 0 | 0 | keras | [
"keras",
"zero-shot-classification",
"en",
"dataset:openchat/openchat_sharegpt4_dataset",
"arxiv:1910.09700",
"license:lgpl-3.0",
"region:us"
]
| zero-shot-classification | 2023-07-13T17:11:47Z | ---
license: lgpl-3.0
datasets:
- openchat/openchat_sharegpt4_dataset
language:
- en
metrics:
- accuracy
library_name: keras
pipeline_tag: zero-shot-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Akshay-123/vit-base-patch16-224-in21k | Akshay-123 | 2023-07-13T17:14:53Z | 222 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-13T16:54:32Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: vit-base-patch16-224-in21k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7692
- F1: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 10 | 1.5877 | 0.6675 |
| No log | 2.0 | 20 | 1.4149 | 0.8402 |
| No log | 3.0 | 30 | 1.2687 | 0.8917 |
| No log | 4.0 | 40 | 1.1382 | 0.9113 |
| No log | 5.0 | 50 | 1.0214 | 0.9523 |
| No log | 6.0 | 60 | 0.9285 | 0.9662 |
| No log | 7.0 | 70 | 0.8601 | 0.9728 |
| No log | 8.0 | 80 | 0.8089 | 0.9797 |
| No log | 9.0 | 90 | 0.7796 | 0.9865 |
| No log | 10.0 | 100 | 0.7692 | 0.9865 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ayanban011/6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.5 | ayanban011 | 2023-07-13T17:14:08Z | 165 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-13T15:21:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4277
- Accuracy: 0.835
- Brier Loss: 0.2653
- Nll: 1.5700
- F1 Micro: 0.835
- F1 Macro: 0.8164
- Ece: 0.1805
- Aurc: 0.0632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 1.6826 | 0.23 | 0.8622 | 4.7953 | 0.23 | 0.1892 | 0.2929 | 0.7651 |
| No log | 2.0 | 50 | 1.0374 | 0.53 | 0.6004 | 2.7646 | 0.53 | 0.4280 | 0.2624 | 0.2619 |
| No log | 3.0 | 75 | 0.8158 | 0.665 | 0.4678 | 2.4034 | 0.665 | 0.5565 | 0.2488 | 0.1416 |
| No log | 4.0 | 100 | 0.6879 | 0.72 | 0.3838 | 1.5355 | 0.72 | 0.6873 | 0.2297 | 0.1064 |
| No log | 5.0 | 125 | 0.6511 | 0.775 | 0.3538 | 1.5183 | 0.775 | 0.7285 | 0.2235 | 0.0915 |
| No log | 6.0 | 150 | 0.7310 | 0.755 | 0.3579 | 1.3899 | 0.755 | 0.7257 | 0.2190 | 0.0926 |
| No log | 7.0 | 175 | 0.5698 | 0.795 | 0.3246 | 1.3920 | 0.795 | 0.7691 | 0.2251 | 0.0956 |
| No log | 8.0 | 200 | 0.5675 | 0.805 | 0.3064 | 1.4278 | 0.805 | 0.7733 | 0.2093 | 0.0655 |
| No log | 9.0 | 225 | 0.5986 | 0.8 | 0.3356 | 1.5317 | 0.8000 | 0.7890 | 0.2249 | 0.0913 |
| No log | 10.0 | 250 | 0.6158 | 0.755 | 0.3475 | 1.5027 | 0.755 | 0.7340 | 0.2152 | 0.0782 |
| No log | 11.0 | 275 | 0.5353 | 0.815 | 0.3037 | 1.6003 | 0.815 | 0.8143 | 0.2305 | 0.0749 |
| No log | 12.0 | 300 | 0.5460 | 0.825 | 0.3008 | 1.7407 | 0.825 | 0.8070 | 0.2378 | 0.0856 |
| No log | 13.0 | 325 | 0.4905 | 0.83 | 0.2787 | 1.1328 | 0.83 | 0.8099 | 0.2344 | 0.0481 |
| No log | 14.0 | 350 | 0.4913 | 0.795 | 0.2881 | 1.2261 | 0.795 | 0.7521 | 0.2121 | 0.0661 |
| No log | 15.0 | 375 | 0.4773 | 0.835 | 0.2753 | 1.2716 | 0.835 | 0.8140 | 0.2125 | 0.0636 |
| No log | 16.0 | 400 | 0.4848 | 0.84 | 0.2751 | 1.5983 | 0.8400 | 0.8139 | 0.2195 | 0.0707 |
| No log | 17.0 | 425 | 0.4994 | 0.805 | 0.2886 | 1.5637 | 0.805 | 0.7689 | 0.2049 | 0.0617 |
| No log | 18.0 | 450 | 0.4610 | 0.835 | 0.2871 | 1.3906 | 0.835 | 0.8122 | 0.2175 | 0.0675 |
| No log | 19.0 | 475 | 0.4594 | 0.84 | 0.2669 | 1.2217 | 0.8400 | 0.8214 | 0.2022 | 0.0516 |
| 0.4534 | 20.0 | 500 | 0.4793 | 0.815 | 0.2874 | 1.4445 | 0.815 | 0.7965 | 0.2024 | 0.0641 |
| 0.4534 | 21.0 | 525 | 0.5185 | 0.785 | 0.3215 | 1.8358 | 0.785 | 0.7743 | 0.2250 | 0.0850 |
| 0.4534 | 22.0 | 550 | 0.4339 | 0.83 | 0.2635 | 1.2137 | 0.83 | 0.8200 | 0.1944 | 0.0610 |
| 0.4534 | 23.0 | 575 | 0.4640 | 0.825 | 0.2770 | 1.4137 | 0.825 | 0.8086 | 0.1800 | 0.0674 |
| 0.4534 | 24.0 | 600 | 0.4528 | 0.825 | 0.2692 | 1.3148 | 0.825 | 0.8077 | 0.1912 | 0.0678 |
| 0.4534 | 25.0 | 625 | 0.4361 | 0.84 | 0.2600 | 1.4205 | 0.8400 | 0.8278 | 0.2066 | 0.0534 |
| 0.4534 | 26.0 | 650 | 0.4239 | 0.835 | 0.2590 | 1.2112 | 0.835 | 0.8224 | 0.1850 | 0.0544 |
| 0.4534 | 27.0 | 675 | 0.4294 | 0.82 | 0.2636 | 1.2671 | 0.82 | 0.8023 | 0.1866 | 0.0619 |
| 0.4534 | 28.0 | 700 | 0.4327 | 0.84 | 0.2633 | 1.3084 | 0.8400 | 0.8283 | 0.1954 | 0.0628 |
| 0.4534 | 29.0 | 725 | 0.4309 | 0.825 | 0.2640 | 1.4275 | 0.825 | 0.8022 | 0.2117 | 0.0667 |
| 0.4534 | 30.0 | 750 | 0.4299 | 0.83 | 0.2636 | 1.3161 | 0.83 | 0.8103 | 0.2110 | 0.0620 |
| 0.4534 | 31.0 | 775 | 0.4345 | 0.835 | 0.2634 | 1.4605 | 0.835 | 0.8269 | 0.1998 | 0.0562 |
| 0.4534 | 32.0 | 800 | 0.4404 | 0.83 | 0.2743 | 1.3965 | 0.83 | 0.8077 | 0.2198 | 0.0669 |
| 0.4534 | 33.0 | 825 | 0.4254 | 0.83 | 0.2614 | 1.3734 | 0.83 | 0.8133 | 0.1990 | 0.0567 |
| 0.4534 | 34.0 | 850 | 0.4271 | 0.835 | 0.2632 | 1.3963 | 0.835 | 0.8164 | 0.1932 | 0.0649 |
| 0.4534 | 35.0 | 875 | 0.4284 | 0.835 | 0.2636 | 1.3713 | 0.835 | 0.8164 | 0.2127 | 0.0634 |
| 0.4534 | 36.0 | 900 | 0.4262 | 0.835 | 0.2628 | 1.4403 | 0.835 | 0.8164 | 0.1926 | 0.0649 |
| 0.4534 | 37.0 | 925 | 0.4253 | 0.835 | 0.2621 | 1.3813 | 0.835 | 0.8164 | 0.2015 | 0.0628 |
| 0.4534 | 38.0 | 950 | 0.4262 | 0.835 | 0.2626 | 1.4528 | 0.835 | 0.8164 | 0.1971 | 0.0628 |
| 0.4534 | 39.0 | 975 | 0.4271 | 0.835 | 0.2629 | 1.4410 | 0.835 | 0.8164 | 0.1933 | 0.0627 |
| 0.0663 | 40.0 | 1000 | 0.4283 | 0.835 | 0.2639 | 1.4647 | 0.835 | 0.8164 | 0.1996 | 0.0631 |
| 0.0663 | 41.0 | 1025 | 0.4272 | 0.835 | 0.2639 | 1.4417 | 0.835 | 0.8164 | 0.2088 | 0.0630 |
| 0.0663 | 42.0 | 1050 | 0.4276 | 0.835 | 0.2640 | 1.3976 | 0.835 | 0.8164 | 0.1992 | 0.0634 |
| 0.0663 | 43.0 | 1075 | 0.4270 | 0.835 | 0.2633 | 1.4392 | 0.835 | 0.8164 | 0.1892 | 0.0628 |
| 0.0663 | 44.0 | 1100 | 0.4264 | 0.835 | 0.2635 | 1.4429 | 0.835 | 0.8164 | 0.1885 | 0.0631 |
| 0.0663 | 45.0 | 1125 | 0.4269 | 0.835 | 0.2637 | 1.4461 | 0.835 | 0.8164 | 0.1974 | 0.0629 |
| 0.0663 | 46.0 | 1150 | 0.4268 | 0.835 | 0.2636 | 1.4415 | 0.835 | 0.8164 | 0.1866 | 0.0625 |
| 0.0663 | 47.0 | 1175 | 0.4269 | 0.835 | 0.2641 | 1.4646 | 0.835 | 0.8164 | 0.1812 | 0.0636 |
| 0.0663 | 48.0 | 1200 | 0.4271 | 0.835 | 0.2639 | 1.3990 | 0.835 | 0.8164 | 0.1865 | 0.0631 |
| 0.0663 | 49.0 | 1225 | 0.4267 | 0.835 | 0.2639 | 1.4474 | 0.835 | 0.8164 | 0.1946 | 0.0629 |
| 0.0663 | 50.0 | 1250 | 0.4273 | 0.835 | 0.2642 | 1.4492 | 0.835 | 0.8164 | 0.1802 | 0.0631 |
| 0.0663 | 51.0 | 1275 | 0.4272 | 0.835 | 0.2644 | 1.4475 | 0.835 | 0.8164 | 0.1942 | 0.0630 |
| 0.0663 | 52.0 | 1300 | 0.4283 | 0.835 | 0.2648 | 1.5157 | 0.835 | 0.8164 | 0.1963 | 0.0635 |
| 0.0663 | 53.0 | 1325 | 0.4271 | 0.835 | 0.2643 | 1.5046 | 0.835 | 0.8164 | 0.1955 | 0.0633 |
| 0.0663 | 54.0 | 1350 | 0.4271 | 0.835 | 0.2642 | 1.4629 | 0.835 | 0.8164 | 0.1790 | 0.0617 |
| 0.0663 | 55.0 | 1375 | 0.4278 | 0.835 | 0.2649 | 1.5752 | 0.835 | 0.8164 | 0.2007 | 0.0635 |
| 0.0663 | 56.0 | 1400 | 0.4280 | 0.835 | 0.2648 | 1.5165 | 0.835 | 0.8164 | 0.1706 | 0.0631 |
| 0.0663 | 57.0 | 1425 | 0.4275 | 0.835 | 0.2644 | 1.5134 | 0.835 | 0.8164 | 0.1864 | 0.0629 |
| 0.0663 | 58.0 | 1450 | 0.4270 | 0.835 | 0.2643 | 1.5088 | 0.835 | 0.8164 | 0.1883 | 0.0630 |
| 0.0663 | 59.0 | 1475 | 0.4273 | 0.835 | 0.2644 | 1.5111 | 0.835 | 0.8164 | 0.1951 | 0.0630 |
| 0.0615 | 60.0 | 1500 | 0.4281 | 0.835 | 0.2651 | 1.5727 | 0.835 | 0.8164 | 0.2084 | 0.0630 |
| 0.0615 | 61.0 | 1525 | 0.4271 | 0.835 | 0.2647 | 1.5198 | 0.835 | 0.8164 | 0.1957 | 0.0631 |
| 0.0615 | 62.0 | 1550 | 0.4276 | 0.835 | 0.2649 | 1.5139 | 0.835 | 0.8164 | 0.1969 | 0.0630 |
| 0.0615 | 63.0 | 1575 | 0.4269 | 0.835 | 0.2646 | 1.4579 | 0.835 | 0.8164 | 0.1802 | 0.0629 |
| 0.0615 | 64.0 | 1600 | 0.4275 | 0.835 | 0.2648 | 1.5144 | 0.835 | 0.8164 | 0.2006 | 0.0632 |
| 0.0615 | 65.0 | 1625 | 0.4276 | 0.835 | 0.2649 | 1.5129 | 0.835 | 0.8164 | 0.1846 | 0.0632 |
| 0.0615 | 66.0 | 1650 | 0.4272 | 0.835 | 0.2647 | 1.5165 | 0.835 | 0.8164 | 0.1796 | 0.0629 |
| 0.0615 | 67.0 | 1675 | 0.4273 | 0.835 | 0.2647 | 1.5141 | 0.835 | 0.8164 | 0.1882 | 0.0631 |
| 0.0615 | 68.0 | 1700 | 0.4276 | 0.835 | 0.2649 | 1.5146 | 0.835 | 0.8164 | 0.1799 | 0.0631 |
| 0.0615 | 69.0 | 1725 | 0.4275 | 0.835 | 0.2649 | 1.5215 | 0.835 | 0.8164 | 0.1799 | 0.0631 |
| 0.0615 | 70.0 | 1750 | 0.4275 | 0.835 | 0.2647 | 1.5124 | 0.835 | 0.8164 | 0.1884 | 0.0632 |
| 0.0615 | 71.0 | 1775 | 0.4278 | 0.835 | 0.2652 | 1.5245 | 0.835 | 0.8164 | 0.1800 | 0.0631 |
| 0.0615 | 72.0 | 1800 | 0.4277 | 0.835 | 0.2650 | 1.5169 | 0.835 | 0.8164 | 0.1802 | 0.0631 |
| 0.0615 | 73.0 | 1825 | 0.4277 | 0.835 | 0.2651 | 1.5282 | 0.835 | 0.8164 | 0.1804 | 0.0633 |
| 0.0615 | 74.0 | 1850 | 0.4273 | 0.835 | 0.2650 | 1.5156 | 0.835 | 0.8164 | 0.1804 | 0.0632 |
| 0.0615 | 75.0 | 1875 | 0.4278 | 0.835 | 0.2653 | 1.5706 | 0.835 | 0.8164 | 0.1804 | 0.0632 |
| 0.0615 | 76.0 | 1900 | 0.4275 | 0.835 | 0.2651 | 1.5337 | 0.835 | 0.8164 | 0.1807 | 0.0633 |
| 0.0615 | 77.0 | 1925 | 0.4276 | 0.835 | 0.2652 | 1.5357 | 0.835 | 0.8164 | 0.1804 | 0.0633 |
| 0.0615 | 78.0 | 1950 | 0.4275 | 0.835 | 0.2651 | 1.5701 | 0.835 | 0.8164 | 0.1805 | 0.0633 |
| 0.0615 | 79.0 | 1975 | 0.4277 | 0.835 | 0.2651 | 1.5161 | 0.835 | 0.8164 | 0.1807 | 0.0633 |
| 0.0614 | 80.0 | 2000 | 0.4278 | 0.835 | 0.2653 | 1.5709 | 0.835 | 0.8164 | 0.1808 | 0.0632 |
| 0.0614 | 81.0 | 2025 | 0.4278 | 0.835 | 0.2653 | 1.5703 | 0.835 | 0.8164 | 0.1804 | 0.0632 |
| 0.0614 | 82.0 | 2050 | 0.4278 | 0.835 | 0.2653 | 1.5700 | 0.835 | 0.8164 | 0.1806 | 0.0633 |
| 0.0614 | 83.0 | 2075 | 0.4277 | 0.835 | 0.2652 | 1.5700 | 0.835 | 0.8164 | 0.1803 | 0.0631 |
| 0.0614 | 84.0 | 2100 | 0.4276 | 0.835 | 0.2652 | 1.5694 | 0.835 | 0.8164 | 0.1804 | 0.0632 |
| 0.0614 | 85.0 | 2125 | 0.4275 | 0.835 | 0.2652 | 1.5702 | 0.835 | 0.8164 | 0.1807 | 0.0633 |
| 0.0614 | 86.0 | 2150 | 0.4276 | 0.835 | 0.2652 | 1.5699 | 0.835 | 0.8164 | 0.1805 | 0.0633 |
| 0.0614 | 87.0 | 2175 | 0.4277 | 0.835 | 0.2653 | 1.5703 | 0.835 | 0.8164 | 0.1805 | 0.0633 |
| 0.0614 | 88.0 | 2200 | 0.4277 | 0.835 | 0.2652 | 1.5702 | 0.835 | 0.8164 | 0.1882 | 0.0632 |
| 0.0614 | 89.0 | 2225 | 0.4277 | 0.835 | 0.2653 | 1.5702 | 0.835 | 0.8164 | 0.1806 | 0.0633 |
| 0.0614 | 90.0 | 2250 | 0.4276 | 0.835 | 0.2653 | 1.5696 | 0.835 | 0.8164 | 0.1806 | 0.0633 |
| 0.0614 | 91.0 | 2275 | 0.4277 | 0.835 | 0.2653 | 1.5698 | 0.835 | 0.8164 | 0.1805 | 0.0632 |
| 0.0614 | 92.0 | 2300 | 0.4276 | 0.835 | 0.2652 | 1.5699 | 0.835 | 0.8164 | 0.1805 | 0.0632 |
| 0.0614 | 93.0 | 2325 | 0.4277 | 0.835 | 0.2653 | 1.5700 | 0.835 | 0.8164 | 0.1805 | 0.0632 |
| 0.0614 | 94.0 | 2350 | 0.4276 | 0.835 | 0.2653 | 1.5698 | 0.835 | 0.8164 | 0.1805 | 0.0632 |
| 0.0614 | 95.0 | 2375 | 0.4277 | 0.835 | 0.2653 | 1.5699 | 0.835 | 0.8164 | 0.1805 | 0.0632 |
| 0.0614 | 96.0 | 2400 | 0.4276 | 0.835 | 0.2653 | 1.5700 | 0.835 | 0.8164 | 0.1805 | 0.0632 |
| 0.0614 | 97.0 | 2425 | 0.4277 | 0.835 | 0.2653 | 1.5699 | 0.835 | 0.8164 | 0.1805 | 0.0632 |
| 0.0614 | 98.0 | 2450 | 0.4276 | 0.835 | 0.2653 | 1.5699 | 0.835 | 0.8164 | 0.1805 | 0.0632 |
| 0.0614 | 99.0 | 2475 | 0.4277 | 0.835 | 0.2653 | 1.5700 | 0.835 | 0.8164 | 0.1805 | 0.0632 |
| 0.0614 | 100.0 | 2500 | 0.4277 | 0.835 | 0.2653 | 1.5700 | 0.835 | 0.8164 | 0.1805 | 0.0632 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Arup-Dutta-Bappy/bert-base-cased-finetuned-squad | Arup-Dutta-Bappy | 2023-07-13T16:53:32Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-07-13T14:36:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
anyachan/ernalora | anyachan | 2023-07-13T16:46:05Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-13T16:41:22Z | ---
license: creativeml-openrail-m
---
|
grace-pro/afriberta-base-finetuned-hausa-2e-3 | grace-pro | 2023-07-13T16:45:14Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-13T16:28:08Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-base-finetuned-hausa-2e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-base-finetuned-hausa-2e-3
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2360
- Precision: 0.1719
- Recall: 0.0276
- F1: 0.0476
- Accuracy: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2428 | 1.0 | 1312 | 0.2368 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2435 | 2.0 | 2624 | 0.2385 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2428 | 3.0 | 3936 | 0.2371 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2434 | 4.0 | 5248 | 0.2359 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2411 | 5.0 | 6560 | 0.2360 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Gustavosta/SowlfieModelsRVC | Gustavosta | 2023-07-13T16:44:17Z | 0 | 2 | null | [
"audio-to-audio",
"pt",
"en",
"license:mit",
"region:us"
]
| audio-to-audio | 2023-07-09T22:53:01Z | ---
license: mit
language:
- pt
- en
pipeline_tag: audio-to-audio
---
# Sowlfie Models RVC
Repository with public **models for RVC** that I make. (**Open commissions** on "@lengodev" on Discord)
Have suggestions? [Open an issue](https://huggingface.co/Gustavosta/SowlfieModelsRVC/discussions/new)!
## 🎤 RVC Models:
| Model | Dataset | Epochs | Language | Sample |
|---|:---:|---:|---:|---:|
| [Pica-Pau (Woody Woodpecker PT-BR)](https://huggingface.co/Gustavosta/SowlfieModelsRVC/resolve/main/pica-pau-model-rvc-v2.zip) | [8 Minutes, 120 segments](https://drive.google.com/file/d/1t37uofCRrohhPLxcXfJWlfhIU_afwIdM/view?usp=sharing) | 400 Epochs | 🇧🇷 Brazilian Portuguese | [Bolo de morango de cada estado](https://youtu.be/UxmEFyC4R_0) |
## ❓ How to use a model?
If you've never used RVC v2 before, I recommend checking out **[this guide](https://docs.google.com/document/d/13_l1bd1Osgz7qlAZn-zhklCbHpVRk6bYOuAuB78qmsE/edit?pli=1)**.
To use a model from this repository, you will **need the URL of the `.zip` model** file in the repository and modify the URL, adding "`/resolve/main/`" in the **URL slug before the filename**. Then you can **use the URL in the model download field**.
**URL Example**:
```
https://huggingface.co/Gustavosta/SowlfieModelsRVC/resolve/main/model-filename.zip
```
## ⚖️ Licence:
[MIT](https://huggingface.co/models?license=license:mit) Licence
---
⚠️ It's hard work to **build datasets**, **train models** and make them **available for free**. So if you use the model, please **credit the model** under the name of `Sowlfie Models` or `Gustavosta`.
Anyway, **thanks for reading this far**! 🤝 |
brunogs/distilbert-base-uncased-finetuned-cola | brunogs | 2023-07-13T16:42:33Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-13T15:53:06Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: brunogs/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# brunogs/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1860
- Validation Loss: 0.5510
- Train Matthews Correlation: 0.5076
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5165 | 0.4641 | 0.4474 | 0 |
| 0.3176 | 0.4989 | 0.5060 | 1 |
| 0.1860 | 0.5510 | 0.5076 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Tanor/BERTovoSENTNEG4 | Tanor | 2023-07-13T16:40:23Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-08T20:56:45Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: BERTovoSENTNEG4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTovoSENTNEG4
This model is a fine-tuned version of [Tanor/BERTicovoSENTNEG4](https://huggingface.co/Tanor/BERTicovoSENTNEG4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0675
- F1: 0.5778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 53 | 0.0491 | 0.0800 |
| No log | 2.0 | 106 | 0.0487 | 0.5294 |
| No log | 3.0 | 159 | 0.0517 | 0.5714 |
| No log | 4.0 | 212 | 0.0585 | 0.5366 |
| No log | 5.0 | 265 | 0.0675 | 0.5778 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Daniil-plotnikov/russian-vision-v5-beta | Daniil-plotnikov | 2023-07-13T16:37:04Z | 35 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"ru",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-12T22:47:22Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
language:
- ru
- en
---
Модель основанная на Stable Deffusion, которая понимает русский язык! На базовом уровне , но понимает! |
1aurent/poca-SoccerTwos | 1aurent | 2023-07-13T16:33:04Z | 25 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-07-13T15:40:45Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: 1aurent/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
grace-pro/afriberta-large-finetuned-hausa-2e-3 | grace-pro | 2023-07-13T16:24:53Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-13T16:02:55Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-large-finetuned-hausa-2e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-large-finetuned-hausa-2e-3
This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- Precision: 0.1719
- Recall: 0.0276
- F1: 0.0476
- Accuracy: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2428 | 1.0 | 1312 | 0.2358 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2436 | 2.0 | 2624 | 0.2366 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2429 | 3.0 | 3936 | 0.2365 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2434 | 4.0 | 5248 | 0.2358 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
| 0.2411 | 5.0 | 6560 | 0.2359 | 0.1719 | 0.0276 | 0.0476 | 0.9373 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
VK246/IC_ver6b_coco_swin_gpt2_50Bpc_1e | VK246 | 2023-07-13T16:16:38Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:coco",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-07-13T12:40:57Z | ---
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
- bleu
model-index:
- name: IC_ver6b_coco_swin_gpt2_50Bpc_1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver6b_coco_swin_gpt2_50Bpc_1e
This model is a fine-tuned version of [VK246/IC_ver6a_coco_swin_gpt2_50Apc_1e](https://huggingface.co/VK246/IC_ver6a_coco_swin_gpt2_50Apc_1e) on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8180
- Rouge1: 41.462
- Rouge2: 16.1291
- Rougel: 37.6518
- Rougelsum: 37.6471
- Bleu: 9.9643
- Gen Len: 11.3063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|:-------:|
| 0.8889 | 0.17 | 500 | 0.8659 | 39.7654 | 14.603 | 36.1709 | 36.1638 | 8.8491 | 11.3063 |
| 0.8756 | 0.34 | 1000 | 0.8515 | 40.3678 | 15.2852 | 36.7303 | 36.7188 | 9.3029 | 11.3063 |
| 0.862 | 0.51 | 1500 | 0.8388 | 40.7537 | 15.2635 | 37.0523 | 37.0379 | 9.3057 | 11.3063 |
| 0.8546 | 0.68 | 2000 | 0.8281 | 40.961 | 15.6192 | 37.1627 | 37.1546 | 9.7453 | 11.3063 |
| 0.837 | 0.85 | 2500 | 0.8214 | 41.5703 | 16.1006 | 37.7767 | 37.7654 | 9.9062 | 11.3063 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
raehik/iccs-ss23-hf-model | raehik | 2023-07-13T15:53:29Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2023-07-13T15:21:10Z | ---
license: mit
---
Hello, this is a model trained on the MNIST dataset!
The code used to train (and save) the model is over here:
https://github.com/Cambridge-ICCS/ss23-hf-code .
|
soBeauty/20230714-1-xlm-roberta-base-confusion | soBeauty | 2023-07-13T15:25:56Z | 158 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-07-13T14:55:35Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 20230714-1-xlm-roberta-base-confusion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230714-1-xlm-roberta-base-confusion
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.4259
- Loss: 3.1665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 3.9983 | 3.85 | 500 | 0.3590 | 3.3866 |
| 3.3131 | 7.69 | 1000 | 0.4259 | 3.1665 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
orya16215/ppo-Huggy | orya16215 | 2023-07-13T15:17:58Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-13T15:17:55Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: orya16215/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
akifhasan/sabbur | akifhasan | 2023-07-13T15:13:36Z | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-13T15:06:48Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### sabbur Dreambooth model trained by akifhasan with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Subsets and Splits