modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
CyberHarem/hoto_kokoa_istheorderarabbit | CyberHarem | 2023-09-29T19:37:38Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/hoto_kokoa_istheorderarabbit",
"license:mit",
"region:us"
] | text-to-image | 2023-09-28T03:39:29Z | ---
license: mit
datasets:
- CyberHarem/hoto_kokoa_istheorderarabbit
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hoto_kokoa_istheorderarabbit
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the `.pt` and `.safetensors` files for the chosen step, you need to use them together: the `.pt` file is loaded as an embedding, while the `.safetensors` file is loaded as the LoRA.
For example, to use the model from step 8680, download `8680/hoto_kokoa_istheorderarabbit.pt` as the embedding and `8680/hoto_kokoa_istheorderarabbit.safetensors` as the LoRA. Using both files together lets you generate images of the desired character.
**The best step we recommend is 8680**, with a score of 0.863. The trigger words are:
1. `hoto_kokoa_istheorderarabbit`
2. `orange_hair, blush, hair_ornament, smile, hairclip, purple_eyes, bangs, closed_mouth, indoors, short_hair, brown_hair`
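Below is a minimal, illustrative `diffusers` sketch of the two-file loading described above (not from the original card): the base model mirrors the preview setup, while the prompt, paths, and output filename are assumptions. A1111-style web UIs load the same files through their embedding and LoRA folders instead.
```python
import torch
from diffusers import StableDiffusionPipeline

# Base model assumed to match the preview setup above (Meina/MeinaMix_V11).
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Step-8680 files: the .pt as a textual-inversion embedding, the .safetensors as the LoRA.
pipe.load_textual_inversion(
    "8680/hoto_kokoa_istheorderarabbit.pt", token="hoto_kokoa_istheorderarabbit"
)
pipe.load_lora_weights("8680", weight_name="hoto_kokoa_istheorderarabbit.safetensors")

# Prompt built from the trigger words listed above.
image = pipe("hoto_kokoa_istheorderarabbit, orange_hair, blush, smile, indoors").images[0]
image.save("preview.png")
```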
This model is not recommended for the following groups of people, and we apologize to them:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images produced by Stable Diffusion-based models.
4. Individuals who are uncomfortable with the fully automated process of training character models with LoRA, or who believe character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | bondage | nude | nude2 |
|:---------|:----------|:------------------------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:------------------------------------------------|
| 9300 | 0.807 | [Download](9300/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](9300/previews/bondage.png) | [<NSFW, click to see>](9300/previews/nude.png) | [<NSFW, click to see>](9300/previews/nude2.png) |
| **8680** | **0.863** | [**Download**](8680/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](8680/previews/bondage.png) | [<NSFW, click to see>](8680/previews/nude.png) | [<NSFW, click to see>](8680/previews/nude2.png) |
| 8060 | 0.857 | [Download](8060/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](8060/previews/bondage.png) | [<NSFW, click to see>](8060/previews/nude.png) | [<NSFW, click to see>](8060/previews/nude2.png) |
| 7440 | 0.855 | [Download](7440/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](7440/previews/bondage.png) | [<NSFW, click to see>](7440/previews/nude.png) | [<NSFW, click to see>](7440/previews/nude2.png) |
| 6820 | 0.831 | [Download](6820/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](6820/previews/bondage.png) | [<NSFW, click to see>](6820/previews/nude.png) | [<NSFW, click to see>](6820/previews/nude2.png) |
| 6200 | 0.847 | [Download](6200/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](6200/previews/bondage.png) | [<NSFW, click to see>](6200/previews/nude.png) | [<NSFW, click to see>](6200/previews/nude2.png) |
| 5580 | 0.821 | [Download](5580/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](5580/previews/bondage.png) | [<NSFW, click to see>](5580/previews/nude.png) | [<NSFW, click to see>](5580/previews/nude2.png) |
| 4960 | 0.837 | [Download](4960/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](4960/previews/bondage.png) | [<NSFW, click to see>](4960/previews/nude.png) | [<NSFW, click to see>](4960/previews/nude2.png) |
| 4340 | 0.816 | [Download](4340/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](4340/previews/bondage.png) | [<NSFW, click to see>](4340/previews/nude.png) | [<NSFW, click to see>](4340/previews/nude2.png) |
| 3720 | 0.810 | [Download](3720/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](3720/previews/bondage.png) | [<NSFW, click to see>](3720/previews/nude.png) | [<NSFW, click to see>](3720/previews/nude2.png) |
| 3100 | 0.815 | [Download](3100/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](3100/previews/bondage.png) | [<NSFW, click to see>](3100/previews/nude.png) | [<NSFW, click to see>](3100/previews/nude2.png) |
| 2480 | 0.774 | [Download](2480/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](2480/previews/bondage.png) | [<NSFW, click to see>](2480/previews/nude.png) | [<NSFW, click to see>](2480/previews/nude2.png) |
| 1860 | 0.798 | [Download](1860/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](1860/previews/bondage.png) | [<NSFW, click to see>](1860/previews/nude.png) | [<NSFW, click to see>](1860/previews/nude2.png) |
| 1240 | 0.814 | [Download](1240/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](1240/previews/bondage.png) | [<NSFW, click to see>](1240/previews/nude.png) | [<NSFW, click to see>](1240/previews/nude2.png) |
| 620 | 0.722 | [Download](620/hoto_kokoa_istheorderarabbit.zip) | [<NSFW, click to see>](620/previews/bondage.png) | [<NSFW, click to see>](620/previews/nude.png) | [<NSFW, click to see>](620/previews/nude2.png) |
|
PHL99/Reinforce-Pixelcopter-PLE-v0 | PHL99 | 2023-09-29T19:19:25Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-08T22:31:30Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 24.60 +/- 13.43
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
abdelrahmanelo/Honadf | abdelrahmanelo | 2023-09-29T19:19:15Z | 0 | 0 | allennlp | [
"allennlp",
"art",
"text-classification",
"ar",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-09-29T19:16:01Z | ---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- ar
metrics:
- accuracy
library_name: allennlp
pipeline_tag: text-classification
tags:
- art
--- |
anzorq/m2m100_418M_ft_ru-kbd_63K | anzorq | 2023-09-29T19:18:24Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"ru",
"zu",
"dataset:anzorq/ru-kbd",
"base_model:facebook/m2m100_418M",
"base_model:finetune:facebook/m2m100_418M",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-29T19:14:12Z | ---
language:
- ru
- zu
license: mit
base_model: facebook/m2m100_418M
tags:
- generated_from_trainer
datasets:
- anzorq/ru-kbd
model-index:
- name: m2m100_418M_ft_ru-kbd_63K
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M_ft_ru-kbd_63K
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the anzorq/ru-kbd dataset.
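As a usage sketch (not part of the original card), translation with this checkpoint might look like the following; the `zu` target code simply mirrors the card's language tags and is assumed to stand in for Kabardian here.
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("anzorq/m2m100_418M_ft_ru-kbd_63K")
tokenizer = M2M100Tokenizer.from_pretrained("anzorq/m2m100_418M_ft_ru-kbd_63K")

tokenizer.src_lang = "ru"  # source language: Russian
inputs = tokenizer("Добрый день!", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("zu"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```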
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 56
- eval_batch_size: 56
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
akashicmarga/Mistral-7B-Instruct-v0.1-q4f16_1-metal | akashicmarga | 2023-09-29T19:17:13Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-09-29T18:37:49Z | ---
license: apache-2.0
---
The model in this repository uses Mistral-7B-Instruct-v0.1 (https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) compiled with mlc-llm (https://llm.mlc.ai/docs/) for Metal with 4-bit quantization, plus an embedding layer for MLC embedding. You can run the model locally through the FastAPI server as an alternative to the OpenAI API. For use with LangChain, see the sample_langchain.py example: https://github.com/mlc-ai/mlc-llm/blob/main/examples/rest/python/sample_langchain.py.
Environment setup:
```bash
conda create -n mlc-chat-venv -c mlc-ai -c conda-forge mlc-chat-cli-nightly
conda activate mlc-chat-venv
```
FastAPI server:
```bash
python -m mlc_chat.rest --model Mistral-7B-Instruct-v0.1-q4f16_1/ --lib-path Mistral-7B-Instruct-v0.1-q4f16_1/Mistral-7B-Instruct-v0.1-q4f16_1-metal.so
```
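Once the server is running, a minimal request sketch might look like this; the port and the OpenAI-style endpoint path are assumptions based on the mlc-llm REST docs, so adjust them if your setup differs.
```python
import requests

# Default host/port of the mlc_chat REST server assumed; change if launched differently.
resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "Hello!"}], "stream": False},
)
print(resp.json())
```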
|
dyaminda/pneumonia-classification-02 | dyaminda | 2023-09-29T19:11:48Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"alexnet",
"image-classification",
"generated_from_trainer",
"custom_code",
"autotrain_compatible",
"region:us"
] | image-classification | 2023-09-28T19:56:20Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pneumonia-classification-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pneumonia-classification-02
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1321
- Accuracy: 0.9474
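A minimal inference sketch (not from the original card): the image path is a placeholder, and `trust_remote_code=True` is assumed to be required because the repo ships a custom AlexNet implementation.
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="dyaminda/pneumonia-classification-02",
    trust_remote_code=True,  # the repo registers a custom architecture
)
print(classifier("chest_xray.png"))  # placeholder image path
```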
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 50
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4043 | 0.99 | 52 | 0.3141 | 0.8747 |
| 0.2279 | 2.0 | 105 | 0.1656 | 0.9439 |
| 0.1707 | 2.99 | 157 | 0.1481 | 0.9332 |
| 0.1691 | 4.0 | 210 | 0.1305 | 0.9570 |
| 0.1337 | 4.95 | 260 | 0.1244 | 0.9475 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
pamelapaolacb/pruebaModeloTFM_DistilBert_in | pamelapaolacb | 2023-09-29T18:54:58Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-cased-distilled-squad",
"base_model:finetune:distilbert/distilbert-base-cased-distilled-squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-09-29T14:55:16Z | ---
license: apache-2.0
base_model: distilbert-base-cased-distilled-squad
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: pruebaModeloTFM_DistilBert_in
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pruebaModeloTFM_DistilBert_in
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the squad dataset.
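A minimal inference sketch (not part of the original card); the question and context are placeholders.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="pamelapaolacb/pruebaModeloTFM_DistilBert_in")
print(qa(question="Where is the Eiffel Tower located?",
         context="The Eiffel Tower is located in Paris, France."))
```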
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-randomized_10_layers_0.003_lr_2_e | roa7n | 2023-09-29T18:48:57Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T18:48:54Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
shahidul034/Medical_Llama_2 | shahidul034 | 2023-09-29T18:14:05Z | 4 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T17:57:00Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
```
import torch
from peft import PeftModel
import transformers
import textwrap
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
from transformers.generation.utils import GreedySearchDecoderOnlyOutput
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
DEVICE
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = LlamaForCausalLM.from_pretrained(
"meta-llama/Llama-2-7b-hf",
load_in_8bit=True,
device_map="auto",
)
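# "my-llm" is the local adapter directory produced by the autotrain command further below;
# presumably the adapter can also be loaded from the Hub as "shahidul034/Medical_Llama_2".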
model = PeftModel.from_pretrained(model, "my-llm", torch_dtype=torch.float16)
model.config.pad_token_id = tokenizer.pad_token_id = 0 # unk
model.config.bos_token_id = 1
model.config.eos_token_id = 2
model = model.eval()
model = torch.compile(model)
PROMPT_TEMPLATE = f"""
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
[INSTRUCTION]
### Response:
"""
def create_prompt(instruction: str) -> str:
return PROMPT_TEMPLATE.replace("[INSTRUCTION]", instruction)
print(create_prompt("What is (are) Glaucoma ?"))
def generate_response(prompt: str, model: PeftModel) -> GreedySearchDecoderOnlyOutput:
encoding = tokenizer(prompt, return_tensors="pt")
input_ids = encoding["input_ids"].to(DEVICE)
generation_config = GenerationConfig(
temperature=0.1,
top_p=0.75,
repetition_penalty=1.1,
)
with torch.inference_mode():
return model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=256,
)
def format_response(response: GreedySearchDecoderOnlyOutput) -> str:
decoded_output = tokenizer.decode(response.sequences[0])
response = decoded_output.split("### Response:")[1].strip()
return "\n".join(textwrap.wrap(response))
def ask_alpaca(prompt: str, model: PeftModel = model) -> str:
prompt = create_prompt(prompt)
response = generate_response(prompt, model)
print(format_response(response))
ask_alpaca("What is (are) Glaucoma ?")
```
```
autotrain llm --train --project_name my-llm --model meta-llama/Llama-2-7b-hf --data_path "data" --train_split "train" --text_column "text" --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 10 --num_train_epochs 3 --trainer sft --use_flash_attention_2
```
https://www.mlexpert.io/machine-learning/tutorials/alpaca-and-llama-inference
|
LemTenku/sister-Bee | LemTenku | 2023-09-29T18:10:39Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2306.02707",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-29T17:30:06Z | ---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
library_name: transformers
---
Change from Synthia-7B-v1.2 -> Synthia-7B-v1.3: Base model was changed from LLaMA-2-7B to Mistral-7B-v0.1
All Synthia models are uncensored. Please use them with caution and with the best of intentions. You are responsible for how you use Synthia.
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
# Synthia-7B-v1.3
SynthIA (Synthetic Intelligent Agent) 7B-v1.3 is a Mistral-7B-v0.1 model trained on Orca style datasets. It has been fine-tuned for instruction following as well as having long-form conversations.
<br>

<br>
<br>
#### License Disclaimer:
This model is released under Apache 2.0, and comes with no warranty or guarantees of any kind.
<br>
## Evaluation
We evaluated Synthia-7B-v1.3 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.6237|
|*hellaswag*|acc_norm|0.8349|
|*mmlu*|acc_norm|0.6232|
|*truthfulqa_mc*|mc2|0.5125|
|**Total Average**|-|**0.6485**|
<br>
## Example Usage
### Here is the prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
### The code example below shows how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/Synthia-7B-v1.3"
output_file_path = "./Synthia-7B-conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
json_data = {"prompt": user_input, "answer": answer}
## Save your conversation
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{Synthia-7B-v1.3,
author = {Migel Tissera},
title = {Synthia-7B-v1.3: Synthetic Intelligent Agent},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
osiria/distiluse-base-italian | osiria | 2023-09-29T18:07:35Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"feature-extraction",
"it",
"arxiv:1907.04307",
"arxiv:2010.05609",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-06-11T21:23:41Z | ---
license: apache-2.0
language:
- it
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: DistilUSE</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This is a <b>Universal Sentence Encoder</b> <b>[1]</b> model for the <b>Italian</b> language, obtained using <b>mDistilUSE</b> ([distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1)) as a starting point and focusing it on the Italian language by modifying the embedding layer
(as in <b>[2]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset).
The resulting model has 67M parameters, a vocabulary of 30.785 tokens, and a size of ~270 MB.
It can be used to encode Italian texts and compute similarities between them.
<h3>Quick usage</h3>
```python
from transformers import AutoTokenizer, AutoModel
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("osiria/distiluse-base-italian")
model = AutoModel.from_pretrained("osiria/distiluse-base-italian")
text1 = "Alessandro Manzoni è stato uno scrittore italiano"
text2 = "Giacomo Leopardi è stato un poeta italiano"
vec1 = model(tokenizer.encode(text1, return_tensors = "pt")).last_hidden_state[0,0,:].cpu().detach().numpy()
vec2 = model(tokenizer.encode(text2, return_tensors = "pt")).last_hidden_state[0,0,:].cpu().detach().numpy()
cosine_similarity = np.dot(vec1, vec2)/(np.linalg.norm(vec1)*np.linalg.norm(vec2))
print("COSINE SIMILARITY:", cosine_similarity)
# COSINE SIMILARITY: 0.734292
```
<h3>References</h3>
[1] https://arxiv.org/abs/1907.04307
[2] https://arxiv.org/abs/2010.05609
<h3>License</h3>
The model is released under <b>Apache-2.0</b> license
|
osiria/diablo-italian-base-1.3b | osiria | 2023-09-29T18:07:22Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xglm",
"text-generation",
"it",
"arxiv:2005.14165",
"arxiv:2112.10668",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-29T20:32:54Z | ---
license: mit
language:
- it
pipeline_tag: text-generation
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: DIABLO 1.3B 🔥</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This model is a <b>causal</b> language model for the <b>Italian</b> language, based on a GPT-like <b>[1]</b> architecture (more specifically, the model has been obtained by modifying Meta's XGLM architecture <b>[2]</b> and exploiting its 1.7B checkpoint).
The model has ~1.3B parameters and a vocabulary of 50.335 tokens. It is a foundation model, pre-trained for causal language modeling, so it is mainly suitable for basic natural language generation, and you will have to fine-tune it in order to use it on more specific downstream tasks.
<h3>Quick usage</h3>
In order to use the model for inference on GPU, the following pipeline is needed:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("osiria/diablo-italian-base-1.3b")
model = AutoModelForCausalLM.from_pretrained("osiria/diablo-italian-base-1.3b", torch_dtype=torch.float16)
device = torch.device("cuda")
model = model.to(device)
pipeline_nlg = pipeline("text-generation", model = model, tokenizer = tokenizer, device = 0)
pipeline_nlg("Ciao, mi chiamo Marco Rossi e")
# [{'generated_text': 'Ciao, mi chiamo Marco Rossi e sono un blogger italiano.'}]
```
<h3>Limitations</h3>
The model might behave erratically when presented with prompts which are too far away from its pre-training and, because of the probabilistic nature of its generation, it might occasionally produce biased or offensive content with respect to gender, race, ideologies, and political or religious beliefs.
These limitations imply that the model and its outputs should be used with caution, and should not be involved in situations that require the generated text to be fair or true.
<h3>References</h3>
[1] https://arxiv.org/abs/2005.14165
[2] https://arxiv.org/abs/2112.10668
<h3>License</h3>
The model is released under <b>MIT</b> license |
hemanth11/q-FrozenLake-v1-4x4-noSlippery | hemanth11 | 2023-09-29T17:59:36Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-29T17:52:41Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
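# load_from_hub is the helper defined in the Deep RL Course notebook (it downloads and
# unpickles q-learning.pkl); it is not a standard library import.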
model = load_from_hub(repo_id="hemanth11/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
actionpace/13B-Thorns-l2 | actionpace | 2023-09-29T17:49:47Z | 1 | 0 | null | [
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-07T18:38:21Z | ---
license: other
language:
- en
---
**Some of my own quants:**
* 13B-Thorns-l2_Q4_K_M.gguf
* 13B-Thorns-l2_Q5_K_M.gguf
**Source:** [CalderaAI](https://huggingface.co/CalderaAI)
**Source Model:** [13B-Thorns-l2](https://huggingface.co/CalderaAI/13B-Thorns-l2)
**Source models for CalderaAI/13B-Thorns-l2 (Merge)**
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) ([Ref](https://huggingface.co/actionpace/Nous-Hermes-Llama2-13b))
- [elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2) ([Ref](https://huggingface.co/actionpace/chronos-13b-v2))
- [garage-bAInd/Platypus2-13B](https://huggingface.co/garage-bAInd/Platypus2-13B) ([Ref](https://huggingface.co/actionpace/Platypus2-13B))
- [jondurbin/airoboros-l2-13b-gpt4-1.4.1](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-1.4.1)
- [KoboldAI/LLAMA2-13B-Holodeck-1](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1) ([Ref](https://huggingface.co/actionpace/LLAMA2-13B-Holodeck-1))
- [nRuaif/Kimiko-v2-13B](https://huggingface.co/nRuaif/Kimiko-v2-13B) (Lora)
- [lemonilia/limarp-llama2](https://huggingface.co/lemonilia/limarp-llama2) (Lora)
|
asmaa1/videomae-base-groub17-18-finetuned-SLT-subset | asmaa1 | 2023-09-29T17:49:20Z | 64 | 0 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2023-09-29T06:03:36Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-groub17-18-finetuned-SLT-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-groub17-18-finetuned-SLT-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1433
- Accuracy: 0.175
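A minimal inference sketch (not from the original card); the clip path is a placeholder.
```python
from transformers import pipeline

classifier = pipeline(
    "video-classification",
    model="asmaa1/videomae-base-groub17-18-finetuned-SLT-subset",
)
print(classifier("sign_clip.mp4"))  # placeholder video path
```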
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.8905 | 0.12 | 10 | 3.6811 | 0.05 |
| 3.7833 | 1.12 | 20 | 3.6286 | 0.125 |
| 3.6803 | 2.12 | 30 | 3.5702 | 0.175 |
| 3.5952 | 3.12 | 40 | 3.4705 | 0.15 |
| 3.4882 | 4.12 | 50 | 3.3508 | 0.2 |
| 3.3776 | 5.12 | 60 | 3.2593 | 0.175 |
| 3.2462 | 6.12 | 70 | 3.1780 | 0.2 |
| 3.1493 | 7.12 | 80 | 3.1433 | 0.175 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
AparnaMahajan/Llama2_custom | AparnaMahajan | 2023-09-29T17:49:08Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T17:49:07Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
asmaa1/videomae-base-groub19-20-finetuned-SLT-subset | asmaa1 | 2023-09-29T17:44:00Z | 61 | 0 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2023-09-29T06:19:30Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-groub19-20-finetuned-SLT-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-groub19-20-finetuned-SLT-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1970
- Accuracy: 0.1220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.853 | 0.14 | 11 | 3.6435 | 0.0732 |
| 3.7412 | 1.14 | 22 | 3.5800 | 0.0732 |
| 3.7045 | 2.14 | 33 | 3.4833 | 0.1220 |
| 3.487 | 3.14 | 44 | 3.3655 | 0.1220 |
| 3.4174 | 4.14 | 55 | 3.2769 | 0.1220 |
| 3.3735 | 5.14 | 66 | 3.2278 | 0.1220 |
| 3.3319 | 6.14 | 77 | 3.1988 | 0.1220 |
| 3.1906 | 7.04 | 80 | 3.1970 | 0.1220 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
ArneL2206/a2c-PandaReachDense-v2 | ArneL2206 | 2023-09-29T17:43:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-22T19:24:08Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.17 +/- 0.37
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="ArneL2206/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")  # filename assumed
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
jupitercoder/my_sample_peft_model | jupitercoder | 2023-09-29T17:24:48Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T17:24:46Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
adutchscotsman/ppo-Huggy | adutchscotsman | 2023-09-29T17:11:04Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-09-29T17:10:55Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: adutchscotsman/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
espnet/msk_lrs3_train_avsr_avhubert_large_extracted_en_bpe1000 | espnet | 2023-09-29T16:58:13Z | 3 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:lrs3",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2023-09-29T16:28:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- lrs3
license: cc-by-4.0
---
## ESPnet2 AVSR model
### `espnet/msk_lrs3_train_avsr_avhubert_large_extracted_en_bpe1000`
This model was trained by ms-dot-k using the lrs3 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
pip install -e .
cd egs2/lrs3/avsr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/msk_lrs3_train_avsr_avhubert_large_extracted_en_bpe1000
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Sep 28 23:59:06 KST 2023`
- python version: `3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0]`
- espnet version: `espnet 202308`
- pytorch version: `pytorch 1.12.0`
- Git hash: `5d0758e2a7063b82d1f10a8ac2de98eb6cf8a352`
- Commit date: `Wed Aug 30 18:03:42 2023 -0400`
## exp/asr_train_avsr_avhubert_large_extracted_en_bpe1000
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave/test|1321|9890|98.5|1.1|0.4|0.2|1.7|8.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave/test|1321|49750|99.4|0.2|0.4|0.2|0.8|8.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave/test|1321|14940|98.8|0.8|0.4|0.3|1.5|8.8|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_avsr_avhubert_large.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: exp/asr_train_avsr_avhubert_large_extracted_en_bpe1000
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 54927
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 20
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_extracted_en_bpe1000/train/speech_shape
- exp/asr_stats_extracted_en_bpe1000/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_extracted_en_bpe1000/valid/speech_shape
- exp/asr_stats_extracted_en_bpe1000/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 800
- 150
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/extracted/train/feats.scp
- speech
- kaldi_ark
- - dump/extracted/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/extracted/val/feats.scp
- speech
- kaldi_ark
- - dump/extracted/val/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.0003
scheduler: warmuplr
scheduler_conf:
warmup_steps: 8000
token_list:
- <blank>
- <unk>
- S
- ▁THE
- ▁TO
- ▁A
- ▁AND
- T
- ▁I
- ''''
- ▁OF
- ▁THAT
- ▁IN
- ING
- D
- ▁YOU
- ▁WE
- E
- ▁IT
- N
- ED
- ▁IS
- R
- M
- P
- Y
- ▁FOR
- ER
- ▁THIS
- ▁WAS
- RE
- C
- G
- ▁SO
- A
- ▁BE
- ▁THEY
- ▁HAVE
- ▁ARE
- O
- ▁
- ▁ON
- ▁WITH
- LY
- ▁WHAT
- U
- IN
- AL
- ▁MY
- I
- ▁S
- ▁DO
- B
- ▁RE
- L
- ▁ME
- ▁CAN
- ▁BUT
- LE
- ▁ABOUT
- OR
- ▁NOT
- VE
- F
- AR
- RA
- ▁ALL
- ▁OUR
- ▁PEOPLE
- ▁AT
- ▁C
- ▁AS
- IC
- ▁OR
- ▁LIKE
- W
- LL
- K
- ▁AN
- ▁THERE
- ENT
- ▁ONE
- ES
- ▁HE
- RI
- 'ON'
- ▁P
- ▁IF
- ▁FROM
- ▁JUST
- ▁WHEN
- TH
- ▁YOUR
- ▁US
- CE
- ▁DE
- ION
- IT
- ▁KNOW
- ▁HOW
- ▁T
- ▁BECAUSE
- CH
- V
- ▁OUT
- ▁B
- ▁UP
- ▁E
- ▁F
- TE
- ▁HAD
- ▁CO
- LI
- ▁TIME
- ▁THEIR
- ▁MORE
- UR
- ▁WHO
- ▁GO
- EN
- ▁G
- ATION
- AN
- CK
- TER
- ▁SEE
- ▁WOULD
- ▁THESE
- ▁NO
- ▁THEM
- ▁BY
- ▁THINK
- ▁WERE
- IL
- ATE
- ▁GET
- ▁SE
- ▁VERY
- ▁GOING
- ▁EX
- ▁REALLY
- ITY
- ▁WAY
- ▁CON
- H
- RO
- ▁DON
- ▁NOW
- ▁W
- X
- NE
- GE
- ▁WILL
- ▁MAKE
- ▁WANT
- ▁OTHER
- ▁SOME
- LA
- ▁WORLD
- ▁ST
- ▁COULD
- TION
- ▁WORK
- MENT
- ▁SHE
- ▁NEED
- ▁PA
- LO
- OL
- ▁SAY
- ▁MO
- ▁BA
- IST
- ▁FA
- IR
- ▁MA
- ERS
- ▁HAS
- VER
- ▁PO
- IVE
- ▁PRO
- ▁LIFE
- ▁INTO
- ▁WHICH
- ▁THINGS
- ▁WHERE
- ND
- ▁LA
- MP
- ▁BEEN
- ▁SOMETHING
- MA
- ▁THOSE
- US
- ▁NEW
- ▁CH
- ▁RA
- ▁ACTUALLY
- ▁YEARS
- ▁EVEN
- ▁TAKE
- ▁LOOK
- UL
- ▁RIGHT
- ▁SAID
- TIC
- ▁UN
- Z
- AS
- ▁DAY
- ▁HER
- IDE
- ▁BO
- ▁THAN
- ▁HERE
- ▁OVER
- ▁BACK
- ▁LO
- ▁FIRST
- ▁DI
- ▁MOST
- ▁COME
- ▁ALSO
- VI
- KE
- ▁WELL
- IES
- ABLE
- UT
- ▁THEN
- ▁CHANGE
- AGE
- ▁MUCH
- '0'
- ▁MEAN
- OM
- ▁CA
- CO
- AT
- ▁ANY
- ▁HAPPEN
- ▁ONLY
- ▁PART
- ▁SU
- ▁HIS
- ▁SP
- ▁DIS
- ANCE
- ID
- ▁MANY
- ▁RO
- '}'
- ▁{
- OW
- ▁O
- IGHT
- ▁GOOD
- UM
- ▁LIVE
- ▁LOT
- ▁D
- ▁TWO
- ▁LI
- ▁THING
- ▁GOT
- ▁TELL
- AC
- ▁EVERY
- EL
- CI
- ▁WHY
- TA
- FUL
- ▁BEING
- ANT
- EST
- ▁LEARN
- ▁COMP
- ▁DID
- URE
- PE
- ▁FEEL
- ▁DIFFERENT
- ▁PRE
- MO
- TI
- ▁HO
- ▁K
- ▁LITTLE
- IV
- ▁THROUGH
- ▁1
- INE
- ▁KIND
- ME
- RY
- ▁LET
- ▁HELP
- UN
- ICAL
- ▁VI
- ▁SAME
- ECT
- ▁HUMAN
- ▁GIVE
- HE
- ▁TALK
- ▁FE
- ▁HA
- ▁OWN
- ▁AROUND
- ▁USE
- IS
- ALLY
- ▁IDEA
- RESS
- ▁PROBLEM
- ▁PERSON
- ▁TE
- ▁FI
- ▁FIND
- ▁SA
- ▁START
- OS
- TED
- ▁BU
- LG
- NCE
- ATED
- ▁YEAR
- ▁DIDN
- ▁LOVE
- HO
- '5'
- ▁DOWN
- ▁SCHOOL
- ▁TODAY
- ▁QUESTION
- ▁HEAR
- DI
- ▁MAN
- ▁CAR
- MI
- ▁GREAT
- ▁CR
- ▁DOING
- IG
- ▁FACT
- ▁LE
- ▁LONG
- OUS
- ▁RU
- ▁PUT
- ▁AFTER
- ▁EN
- ▁M
- ▁GA
- ▁SHOW
- OP
- ▁SI
- ▁SHOULD
- ▁NE
- ▁STA
- ▁NEVER
- ▁BIG
- NS
- ▁THOUGHT
- ISH
- ▁MIGHT
- ▁ACT
- ▁PLACE
- LU
- END
- IZE
- ▁REAL
- ▁BETTER
- ATIVE
- IA
- ▁UNDERSTAND
- ▁POWER
- ▁IMPORTANT
- IAN
- ▁BRAIN
- ▁SYSTEM
- UAL
- NESS
- ▁END
- ▁ABLE
- ▁BEFORE
- ▁STORY
- ▁OFF
- TOR
- FF
- ▁STARTED
- ▁DR
- ▁MADE
- ▁ASK
- NA
- ▁HU
- ▁CREATE
- ATING
- ▁BI
- ARY
- ▁HIGH
- ▁HIM
- BO
- ITION
- ▁THREE
- ▁PER
- ▁AM
- ▁CALLED
- ▁APP
- ▁CAME
- ▁WOMEN
- ▁OLD
- TY
- ▁PLAY
- '4'
- PP
- ▁PH
- AG
- ▁BELIEVE
- ▁HOME
- ARD
- ▁FRIEND
- ▁RI
- ▁FOUND
- HA
- ▁HAND
- ▁DA
- ▁STILL
- ▁NA
- ▁WORD
- ▁TRANS
- ▁HEALTH
- OUND
- ▁BUILD
- ▁CARE
- ▁WI
- ▁NEXT
- ▁THANK
- ▁TURN
- ▁TOGETHER
- ▁TA
- ▁BECOME
- ▁EXPERIENCE
- VING
- ▁EM
- ▁MEN
- ISE
- ▁MAR
- ▁EACH
- ▁WENT
- ▁TRI
- ▁POINT
- ▁LAST
- ▁MAYBE
- ▁EVER
- ▁CALL
- WARD
- ▁CHILDREN
- ▁DOES
- CA
- ▁BIT
- UC
- LIC
- UGH
- ▁EXAMPLE
- ▁FEW
- ITIES
- ▁ANOTHER
- SH
- ▁TH
- ▁ALWAYS
- ▁H
- ▁READ
- ▁INTEREST
- FORM
- ▁STATE
- ▁MOVE
- IOUS
- ▁MIND
- 'NO'
- AM
- ▁TEACH
- ▁2
- ▁HARD
- ▁WANTED
- ▁20
- ▁GROW
- ▁JOB
- DA
- ▁TOO
- ▁VA
- OME
- ▁MAY
- '8'
- ▁SOCIAL
- ▁HI
- ▁FOOD
- BI
- ▁JO
- ▁COURSE
- ▁FR
- BA
- ▁MOMENT
- ▁AGAIN
- ▁DOESN
- ▁SHARE
- ▁AWAY
- ▁BETWEEN
- ▁LESS
- ▁SHA
- ▁MONEY
- ▁UNDER
- BER
- ▁DEVELOP
- ▁SECOND
- ▁NUMBER
- ▁ART
- QUE
- ▁FAMILY
- '1'
- '7'
- ▁SH
- '6'
- ▁EVERYTHING
- ▁FAR
- ▁WORKING
- ▁KIDS
- ▁PLAN
- ▁CHA
- ▁AGO
- ▁PI
- ▁ENOUGH
- ISM
- ▁AMERICA
- ▁THINKING
- ▁USED
- ▁REASON
- ▁TRY
- ▁SOMEONE
- ▁GENE
- ▁CU
- ▁STUDENT
- ▁TOLD
- ▁GU
- ▁TRYING
- ▁LEAD
- ▁MYSELF
- ▁BEST
- ▁FUTURE
- ▁MILLION
- ▁SMALL
- ▁TECHNOLOGY
- LESS
- ▁PASS
- ▁DONE
- ▁YOUNG
- '9'
- ▁SPACE
- ▁WATER
- ▁MATTER
- ▁OPEN
- ▁COUNTRY
- ▁REMEMBER
- ▁TALKING
- ▁REALIZE
- LAND
- ▁RESEARCH
- Q
- IAL
- ▁WAR
- ▁GROUP
- ▁BOOK
- ▁KEEP
- ▁DEF
- ▁STOP
- ▁HOPE
- ▁CONNECT
- ▁SENSE
- ▁ANSWER
- ▁WALK
- ▁DESIGN
- ▁WEEK
- ▁LANGUAGE
- ▁DATA
- ▁LOOKING
- ▁PERCENT
- ADE
- ▁CLASS
- ▁WHOLE
- ▁BODY
- ▁FOUR
- ▁OFTEN
- ▁ELSE
- ▁WITHOUT
- ▁PROCESS
- ▁FREE
- ▁MAKING
- IBLE
- ▁BRING
- ▁CHILD
- ▁GETTING
- ▁PROBABLY
- ▁ALLOW
- ▁SPEAK
- ▁COMMUNITY
- ▁HAVING
- ▁TOOK
- ▁OP
- ▁JU
- ▁MU
- ▁FACE
- ▁INFORMATION
- ABILITY
- ▁NAME
- ▁NI
- '2'
- ▁GIRL
- ▁CELL
- ▁ANYTHING
- ▁SCIENCE
- ▁STAND
- ▁WHILE
- ▁SUCH
- '000'
- ▁CASE
- J
- ANG
- ▁FIVE
- ▁GUY
- ▁FUN
- ▁BUSINESS
- ▁ROOM
- ▁SELF
- ▁LIVING
- ▁SURE
- ▁IMAGINE
- ▁ASKED
- ▁MIS
- ▁ENERGY
- ▁PROJECT
- ▁STUDY
- ▁DREAM
- ▁10
- ▁STORIES
- ▁ALREADY
- ▁TERM
- ▁EFFECT
- ▁KNEW
- ▁SOCIETY
- ▁PRODUCT
- ▁PRETTY
- ▁EVERYONE
- ▁HEAD
- ▁19
- ▁JA
- ▁LIGHT
- ▁LISTEN
- ▁MUSIC
- ▁LARGE
- ▁QUITE
- ▁J
- ▁BOTH
- ▁CHALLENGE
- ▁SORT
- ▁FELT
- ▁TREAT
- ▁EDUCATION
- ▁WRONG
- ▁YOURSELF
- ▁MIL
- ▁OURSELVES
- ▁SOUND
- ▁PROGRAM
- ▁3
- ▁CLOSE
- ▁QUA
- ▁SINGLE
- ▁MINUTE
- ▁NOTHING
- ▁ENVIRONMENT
- ▁PUBLIC
- ▁ORDER
- ▁OB
- ▁TRUE
- ▁STEP
- ▁WONDER
- ▁NIGHT
- ▁YET
- ▁EYE
- ▁LEFT
- SHIP
- ▁VALUE
- ▁WHETHER
- ▁MOTHER
- ▁SIMPLE
- ▁NU
- ▁WOMAN
- ▁LU
- ▁CONTROL
- ▁COMING
- ▁SAW
- ▁LEVEL
- ▁TEST
- ▁POSSIBLE
- ▁ACROSS
- ▁HOUSE
- ▁WATCH
- ▁GOVERNMENT
- ▁PARENTS
- ▁HALF
- ▁TEN
- ▁DEEP
- ▁CANCER
- ▁ISSUE
- ▁LATER
- ▁SOMETIMES
- ▁ANIMAL
- ▁SUPPORT
- ▁EAT
- ▁CULTURE
- ▁FULL
- ▁INSTEAD
- ▁EARTH
- ▁DISEASE
- ▁MIN
- ▁GAME
- ▁DECIDED
- ▁ALMOST
- ▁SUCCESS
- ▁AMAZING
- ▁DRIVE
- ▁DU
- ▁EMOTION
- ▁GLOBAL
- ▁EQU
- ▁PLANET
- ▁CERTAIN
- ▁HISTORY
- ▁MEET
- ▁TRAIN
- ▁COMPUTER
- ▁BECAME
- ▁TEAM
- ▁DISCOVER
- ▁DIFFERENCE
- WAY
- ▁FOCUS
- ▁PAST
- ▁RESULT
- ▁MONTHS
- ▁MODEL
- ▁YES
- ▁VO
- ▁COUNTRIES
- ▁STUFF
- ▁FIGURE
- ▁30
- ▁PATIENT
- ▁SPEND
- ▁ENTIRE
- ▁INDIVIDUAL
- ▁UNTIL
- ▁THOUGH
- ▁DECISION
- ▁CHOICE
- ▁AFRICA
- ▁RELATIONSHIP
- ▁BREAK
- ▁SOMEBODY
- ▁FOLLOW
- ▁CONVERSATION
- ▁LEAVE
- ▁THOUSAND
- ▁SIGN
- ▁SINCE
- ▁DIFFICULT
- ▁IMPACT
- ▁HOURS
- ▁COUPLE
- ▁CAUSE
- ▁PARTICULAR
- ▁DOCTOR
- ▁TAKING
- ▁COMPANY
- ▁EVERYBODY
- ▁50
- ▁DIRECT
- ▁EXPECT
- ▁200
- ▁ORGAN
- ▁EXACTLY
- ▁THEMSELVES
- ▁HAPPY
- ▁MUST
- ▁SAFE
- ▁BASED
- ▁BEAUTIFUL
- ▁PHONE
- ▁AGAINST
- ▁WRITE
- ▁DRUG
- ▁PICTURE
- ▁MEDIA
- ▁WAIT
- ▁FRONT
- ▁RISK
- ▁BEHAVIOR
- ▁BLACK
- ▁100
- ▁NATURE
- ▁ORGANIZATION
- ▁HUNDRED
- ▁EASY
- ▁ACCESS
- ▁HOLD
- ▁COMMON
- ▁MARKET
- ▁GRAND
- ▁VOICE
- ▁DEATH
- ▁PIECE
- ▁BILLION
- ▁LEAST
- ▁DURING
- '3'
- ▁NATURAL
- ▁TYPE
- ▁INVEST
- ▁GENERATION
- ENCY
- ▁STRONG
- OLOGICAL
- ▁CLEAR
- ▁PRESENT
- ▁INTERNET
- ▁KILL
- OLOGY
- ▁SUPER
- ▁UNITED
- ▁IMAGE
- ▁RATHER
- ▁SOLUTION
- ▁ECONOMIC
- ▁PROTECT
- ▁BEHIND
- ▁COLLECT
- ▁SCIENTIST
- UDE
- ▁PRODUCE
- ▁PERFECT
- ▁DOLLARS
- ▁VIEW
- ▁CONSIDER
- ▁THIRD
- ▁MACHINE
- ▁OUTSIDE
- ▁SKILL
- ▁EXPERIMENT
- ▁COLLEGE
- ▁QUI
- ▁OPPORTUNITY
- ▁LOCAL
- ▁SIMPLY
- ▁EARLY
- ▁MAJOR
- ▁CANNOT
- ▁PHYSICAL
- ▁WHATEVER
- ▁MIDDLE
- ▁VIDEO
- ▁ALONG
- OGRAPH
- ▁SOLVE
- ▁KEY
- ▁TRUST
- ▁FIELD
- HOOD
- ▁ATTENTION
- ▁MICRO
- ▁SHORT
- ▁SITUATION
- ▁STREET
- ▁COMPANIES
- ▁POLITICAL
- ▁NORMAL
- ▁AMOUNT
- ▁SERVICE
- ▁OBJECT
- ▁POTENTIAL
- ▁COLOR
- ▁KNOWLEDGE
- ▁MORNING
- ▁TRUTH
- ▁UNIVERSITY
- ▁PROVIDE
- ▁RESOURCE
- ▁POSITIVE
- ▁EUROPE
- ▁SPECIAL
- ▁CONTINUE
- ▁BASICALLY
- ▁SMART
- ▁PRACTICE
- ▁POPULATION
- ▁TRAVEL
- ▁AFFECT
- ▁FINALLY
- ▁APPROACH
- ▁COUNT
- ▁PERHAPS
- ▁INTERACT
- ▁EXPLAIN
- ▁ENGINEER
- ▁ENGAGE
- ▁SITTING
- ▁OFFICE
- ▁COMPLEX
- ▁WHITE
- ▁GENDER
- ▁MESSAGE
- ▁WORTH
- ▁ITSELF
- IZATION
- ▁BUILT
- ▁IMPROVE
- ▁OKAY
- ▁PRISON
- ▁MATERIAL
- ▁NETWORK
- ▁EITHER
- ▁GIVING
- ▁LIMIT
- ▁MEASURE
- ▁DARK
- ▁AUDIENCE
- ▁ACCEPT
- ▁RECORD
- ▁OCEAN
- ▁CHOOSE
- ▁SPECIES
- ▁YORK
- ▁SUSTAIN
- ▁SLEEP
- ▁OBVIOUS
- ▁HOSPITAL
- ▁PERSPECTIVE
- ▁INCREASE
- ▁OPERA
- ▁TAUGHT
- ▁MULTI
- ▁CHANGING
- ▁JOURNEY
- ▁INDUSTRY
- ▁NEURO
- ▁REQUIRE
- ▁DECADE
- ▁CURRENT
- ▁PUSH
- ▁BENEFIT
- ▁YEAH
- ▁BLOOD
- ▁SCALE
- ▁ESPECIALLY
- ▁COMMUNITIES
- ▁ADULT
- ▁CHARACTER
- ▁REPRESENT
- IFIED
- ▁SUFFER
- ▁RECOGNIZE
- ▁CENTURY
- ▁SUDDEN
- ▁FUNCTION
- ▁ACHIEVE
- ▁SIMILAR
- ▁BROUGHT
- ▁TRADITION
- ▁UNIVERSE
- ▁CLIMATE
- ▁BREATH
- ▁EXTREME
- ▁REPORT
- ▁DAUGHTER
- ▁COMFORT
- ▁CONCEPT
- ▁ECONOMY
- ▁INNOVATION
- ▁QUICKLY
- ▁SUGGEST
- ▁SPECIFIC
- ▁CRAZY
- ▁CONSCIOUS
- ▁SPREAD
- ▁TRULY
- '{'
- <sos/eos>
init: xavier_uniform
input_size: 2048
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: null
frontend_conf: {}
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_extracted_en_bpe1000/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: avhubert
encoder_conf:
avhubert_url: https://dl.fbaipublicfiles.com/avhubert/model/lrs3_vox/noise-pretrain/large_vox_iter5.pt
avhubert_dir_path: ./local/pre-trained
encoder_embed_dim: 1024
encoder_attention_heads: 16
encoder_ffn_embed_dim: 4096
encoder_layers: 24
dropout: 0.1
dropout_features: 0.1
encoder_layerdrop: 0.05
attention_dropout: 0.1
extracted: true
freeze_finetune_updates: 10000
feature_grad_mult: 1.0
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 4096
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202308'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mingto/whisper-small-hi | mingto | 2023-09-29T16:53:57Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-09-29T12:05:50Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4266
- Wer: 33.1457
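A minimal inference sketch (not from the original card); the audio path is a placeholder.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mingto/whisper-small-hi")
print(asr("sample.wav")["text"])  # placeholder audio path
```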
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0823 | 2.44 | 1000 | 0.2954 | 34.8895 |
| 0.0203 | 4.89 | 2000 | 0.3472 | 33.7763 |
| 0.0018 | 7.33 | 3000 | 0.4013 | 33.0399 |
| 0.0005 | 9.78 | 4000 | 0.4266 | 33.1457 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.14.0
|
navradio/swin-tiny-patch4-window7-224-finetuned-200k | navradio | 2023-09-29T16:52:45Z | 213 | 0 | transformers | [
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-09-29T15:25:52Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-200k
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.796086508753862
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-200k
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4347
- Accuracy: 0.7961
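As a hedged usage sketch (the class labels come from the unspecified `imagefolder` dataset, so the predicted labels are simply the training folder names; the image path is a placeholder):

```python
# Minimal inference sketch; "example.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline("image-classification", model="navradio/swin-tiny-patch4-window7-224-finetuned-200k")
for pred in classifier("example.jpg"):
    print(pred["label"], round(pred["score"], 3))
```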
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.634 | 0.99 | 36 | 0.6243 | 0.6262 |
| 0.5551 | 1.99 | 72 | 0.5186 | 0.7250 |
| 0.5183 | 2.98 | 108 | 0.4826 | 0.7673 |
| 0.4854 | 4.0 | 145 | 0.5640 | 0.7261 |
| 0.4645 | 4.99 | 181 | 0.4598 | 0.7817 |
| 0.4655 | 5.99 | 217 | 0.4787 | 0.7786 |
| 0.4582 | 6.98 | 253 | 0.4483 | 0.7899 |
| 0.4415 | 8.0 | 290 | 0.4709 | 0.7765 |
| 0.4546 | 8.99 | 326 | 0.4717 | 0.7817 |
| 0.4566 | 9.99 | 362 | 0.4538 | 0.7951 |
| 0.4675 | 10.98 | 398 | 0.4491 | 0.7817 |
| 0.4449 | 12.0 | 435 | 0.4992 | 0.7652 |
| 0.4349 | 12.99 | 471 | 0.4627 | 0.7817 |
| 0.4253 | 13.99 | 507 | 0.4492 | 0.7858 |
| 0.4278 | 14.98 | 543 | 0.4442 | 0.7951 |
| 0.4567 | 16.0 | 580 | 0.4362 | 0.7899 |
| 0.4205 | 16.99 | 616 | 0.4550 | 0.7889 |
| 0.4233 | 17.99 | 652 | 0.4336 | 0.7909 |
| 0.4014 | 18.98 | 688 | 0.4565 | 0.7889 |
| 0.4176 | 20.0 | 725 | 0.4323 | 0.7940 |
| 0.411 | 20.99 | 761 | 0.4348 | 0.7951 |
| 0.4128 | 21.99 | 797 | 0.4378 | 0.7971 |
| 0.4045 | 22.98 | 833 | 0.4317 | 0.7951 |
| 0.4001 | 24.0 | 870 | 0.4452 | 0.7868 |
| 0.4061 | 24.99 | 906 | 0.4286 | 0.7920 |
| 0.4033 | 25.99 | 942 | 0.4306 | 0.7951 |
| 0.3953 | 26.98 | 978 | 0.4320 | 0.7920 |
| 0.3924 | 28.0 | 1015 | 0.4338 | 0.7940 |
| 0.4056 | 28.99 | 1051 | 0.4329 | 0.7930 |
| 0.4032 | 29.79 | 1080 | 0.4347 | 0.7961 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
dracero/a2c-PandaReachDense-v3 | dracero | 2023-09-29T16:51:27Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-29T16:45:58Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kbooth-insight/booth-test | kbooth-insight | 2023-09-29T16:51:26Z | 29 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-09-29T16:46:18Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### booth-test Dreambooth model trained by kbooth-insight with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
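The weights can presumably also be loaded directly with `diffusers`; the sketch below is an assumption, since the concept's trigger token is not documented and the prompt is only a placeholder:

```python
# Hedged sketch: load the DreamBooth weights with diffusers; the prompt/trigger token is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("kbooth-insight/booth-test", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of the booth-test concept").images[0]
image.save("sample.png")
```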
Sample pictures of this concept:
|
language-ml-lab/postagger-azb | language-ml-lab | 2023-09-29T16:41:02Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"az",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-09-26T16:29:59Z | ---
pipeline_tag: token-classification
widget:
- text: سن نجورسن؟
example_title: Example 1
- text: من سنی سویرم.
example_title: Example 2
- text: سن شاهین قیزین چوخ سئویرسن.
example_title: Example 3
- text: آلما آلیب گلرم، سن هئچ بیر شی آلما.
example_title: Example 4
language:
- az
metrics:
- accuracy
- f1
---
# POS Tagger
- Type: Fine-tuned BERT-based Part-of-Speech (POS) tagging model
- Description: This model has been fine-tuned using [AzerBERT](https://huggingface.co/language-ml-lab/AzerBert) for part-of-speech tagging tasks in Iranian Azerbaijani text. It can be used to annotate text with 11 POS tags, which is essential for various downstream NLP applications.
## How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("token-classification", model="language-ml-lab/postagger-azb")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("language-ml-lab/postagger-azb")
model = AutoModelForTokenClassification.from_pretrained("language-ml-lab/postagger-azb")
``` |
Ranjit/test_2 | Ranjit | 2023-09-29T16:40:48Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:AmazonScience/massive",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-29T16:40:23Z | ---
base_model: xxxxxxxxx
tags:
- generated_from_trainer
datasets:
- AmazonScience/massive
metrics:
- f1
model-index:
- name: massive_indo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# massive_indo
This model is a fine-tuned version of [xxxxxxxxx](https://huggingface.co/xxxxxxxxx) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6866
- F1: 0.8161
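For a quick smoke test, a hedged inference sketch is shown below (the example utterance is a placeholder; the label names are whatever MASSIVE intent classes are stored in the model config):

```python
# Minimal inference sketch for the fine-tuned intent classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="Ranjit/test_2")
print(classifier("wake me up at nine am on friday"))
```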
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.0824 | 0.11 | 2000 | 1.6825 | 0.3184 |
| 1.2059 | 0.22 | 4000 | 1.1052 | 0.5593 |
| 0.8955 | 0.33 | 6000 | 0.8835 | 0.6588 |
| 0.7748 | 0.44 | 8000 | 0.8215 | 0.6894 |
| 0.6839 | 0.54 | 10000 | 0.7765 | 0.7234 |
| 0.6299 | 0.65 | 12000 | 0.7514 | 0.7600 |
| 0.5778 | 0.76 | 14000 | 0.6906 | 0.7707 |
| 0.533 | 0.87 | 16000 | 0.6867 | 0.7771 |
| 0.4877 | 0.98 | 18000 | 0.6850 | 0.7861 |
| 0.4114 | 1.09 | 20000 | 0.6757 | 0.7907 |
| 0.3815 | 1.2 | 22000 | 0.6798 | 0.7956 |
| 0.3785 | 1.31 | 24000 | 0.6809 | 0.7987 |
| 0.3645 | 1.42 | 26000 | 0.6739 | 0.8033 |
| 0.3347 | 1.53 | 28000 | 0.6768 | 0.8037 |
| 0.3345 | 1.63 | 30000 | 0.6457 | 0.8087 |
| 0.3254 | 1.74 | 32000 | 0.6721 | 0.8055 |
| 0.3131 | 1.85 | 34000 | 0.6542 | 0.8125 |
| 0.3072 | 1.96 | 36000 | 0.6652 | 0.8070 |
| 0.2343 | 2.07 | 38000 | 0.6754 | 0.8143 |
| 0.2323 | 2.18 | 40000 | 0.6790 | 0.8167 |
| 0.232 | 2.29 | 42000 | 0.6967 | 0.8101 |
| 0.2171 | 2.4 | 44000 | 0.6999 | 0.8116 |
| 0.215 | 2.51 | 46000 | 0.6927 | 0.8095 |
| 0.2136 | 2.62 | 48000 | 0.6917 | 0.8155 |
| 0.2008 | 2.72 | 50000 | 0.6837 | 0.8137 |
| 0.1997 | 2.83 | 52000 | 0.6925 | 0.8140 |
| 0.1926 | 2.94 | 54000 | 0.6866 | 0.8161 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
RsGoksel/Breast-Tumor-Mass-Detection | RsGoksel | 2023-09-29T16:38:12Z | 0 | 0 | null | [
"Cancer",
"Tumour",
"Breast",
"Mammography",
"Mass",
"object-detection",
"license:apache-2.0",
"region:us"
] | object-detection | 2023-09-29T16:14:39Z | ---
license: apache-2.0
pipeline_tag: object-detection
tags:
- Cancer
- Tumour
- Breast
- Mammography
- Mass
---
## Introduction
The Breast Mass Object Detection Model is designed to detect breast masses in mammography.
- **Developed by:** https://github.com/RsGoksel
### More Tools
- **Repository:** https://github.com/RsGoksel/Breast-Tissue-Cropper-Tools |
RogerB/afro-xlmr-large-kinyarwanda-finetuned-kinyarwanda-tweets-finetuned | RogerB | 2023-09-29T16:35:30Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:RogerB/afro-xlmr-large-kinyarwanda-finetuned",
"base_model:finetune:RogerB/afro-xlmr-large-kinyarwanda-finetuned",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-29T16:21:32Z | ---
license: mit
base_model: RogerB/afro-xlmr-large-kinyarwanda-finetuned
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-kinyarwanda-finetuned-kinyarwanda-tweets-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-kinyarwanda-finetuned-kinyarwanda-tweets-finetuned
This model is a fine-tuned version of [RogerB/afro-xlmr-large-kinyarwanda-finetuned](https://huggingface.co/RogerB/afro-xlmr-large-kinyarwanda-finetuned) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7567
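Since the card gives no usage snippet, here is a hedged fill-mask sketch (`<mask>` is the XLM-R mask token; the Kinyarwanda example sentence is only a placeholder):

```python
# Minimal fill-mask sketch; the example sentence is a placeholder.
from transformers import pipeline

fill = pipeline("fill-mask", model="RogerB/afro-xlmr-large-kinyarwanda-finetuned-kinyarwanda-tweets-finetuned")
print(fill("Muraho, amakuru <mask>?"))
```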
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0292 | 1.0 | 500 | 1.9115 |
| 1.9227 | 2.0 | 1000 | 1.8062 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
twm213/food_classifier | twm213 | 2023-09-29T16:32:47Z | 63 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-09-29T16:16:06Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: twm213/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# twm213/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3748
- Validation Loss: 0.3432
- Train Accuracy: 0.914
- Epoch: 4
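A hedged inference sketch for the TensorFlow checkpoint (the image path is a placeholder; `framework="tf"` forces the TF weights to be used):

```python
# Minimal TF inference sketch; "dish.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline("image-classification", model="twm213/food_classifier", framework="tf")
print(classifier("dish.jpg")[0])
```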
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7859 | 1.6483 | 0.799 | 0 |
| 1.2220 | 0.9133 | 0.842 | 1 |
| 0.7054 | 0.5449 | 0.898 | 2 |
| 0.4945 | 0.4446 | 0.892 | 3 |
| 0.3748 | 0.3432 | 0.914 | 4 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.9.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-randomized_9_layers_0.0003_lr_8_e | roa7n | 2023-09-29T16:27:12Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T16:27:10Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
RsGoksel/Breast-Mammography-Detection | RsGoksel | 2023-09-29T16:26:12Z | 0 | 0 | null | [
"Breast",
"Mammography",
"ROI",
"Medical",
"image-classification",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-09-28T09:45:16Z | ---
license: apache-2.0
pipeline_tag: image-classification
tags:
- Breast
- Mammography
- ROI
- Medical
---
# Breast Tissue ROI Object Detection Model
## Introduction
The Breast Tissue ROI Object Detection Model is designed to locate regions of interest (ROIs) within mammographic images.
### 1. Purpose
The primary purpose of the Breast Tissue ROI Object Detection Model is to accurately and efficiently identify regions of interest in mammographic images. These regions typically contain suspicious lesions, calcifications, or abnormalities that require further examination to determine the presence of breast cancer.
### 2. Deep Learning Architecture
This model is built on a state-of-the-art deep learning architecture that leverages Convolutional Neural Networks (CNNs) for feature extraction. It combines convolutional, pooling, and fully connected layers to process mammographic images effectively.
- **Developed by:** https://github.com/RsGoksel
- **Model type:** Pytorch (.pt)
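No loading code is published, so the snippet below is only an assumption: it treats the released `.pt` file as a torch-loadable detector, and the filename, input size, and output format are all guesses.

```python
# Assumption: the .pt artifact can be loaded with torch.load; filename, input shape and output format are guesses.
import torch

model = torch.load("breast_roi_detector.pt", map_location="cpu")  # hypothetical filename
model.eval()

dummy = torch.randn(1, 3, 640, 640)  # placeholder shape; the real preprocessing is not documented
with torch.no_grad():
    detections = model(dummy)
print(type(detections))
```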
### More Tools
- **Repository:** https://github.com/RsGoksel/Breast-Tissue-Cropper-Tools
 |
TheBloke/NexusRaven-13B-GPTQ | TheBloke | 2023-09-29T16:18:51Z | 30 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2308.12950",
"base_model:Nexusflow/NexusRaven-13B",
"base_model:quantized:Nexusflow/NexusRaven-13B",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2023-09-28T23:00:55Z | ---
base_model: Nexusflow/NexusRaven-13B
inference: false
license: llama2
model-index:
- name: NexusRaven-13B
results: []
model_creator: Nexusflow
model_name: Nexusraven 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nexusraven 13B - GPTQ
- Model creator: [Nexusflow](https://huggingface.co/Nexusflow)
- Original model: [Nexusraven 13B](https://huggingface.co/Nexusflow/NexusRaven-13B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Nexusflow's Nexusraven 13B](https://huggingface.co/Nexusflow/NexusRaven-13B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/NexusRaven-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/NexusRaven-13B-GGUF)
* [Nexusflow's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Nexusflow/NexusRaven-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 14.55 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/NexusRaven-13B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/NexusRaven-13B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `NexusRaven-13B-GPTQ`:
```shell
mkdir NexusRaven-13B-GPTQ
huggingface-cli download TheBloke/NexusRaven-13B-GPTQ --local-dir NexusRaven-13B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir NexusRaven-13B-GPTQ
huggingface-cli download TheBloke/NexusRaven-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir NexusRaven-13B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder, making it harder to see where your disk space is being used and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir NexusRaven-13B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/NexusRaven-13B-GPTQ --local-dir NexusRaven-13B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/NexusRaven-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/NexusRaven-13B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `NexusRaven-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/NexusRaven-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Nexusflow's Nexusraven 13B
# NexusRaven-13B: Surpassing the state-of-the-art in open-source function calling LLMs.
<p align="center">
<a href="https://huggingface.co/Nexusflow" target="_blank">Nexusflow HF</a> - <a href="http://nexusflow.ai/blog" target="_blank">NexusRaven blog post</a> - <a href="https://huggingface.co/Nexusflow/NexusRaven-13B" target="_blank">NexusRaven-13B</a> - <a href="https://x.com/NexusflowX/status/1707470614012035561?s=20" target="_blank">NexusRaven-13B Twitter Thread</a> - <a href="https://github.com/nexusflowai/NexusRaven/" target="_blank">NexusRaven-13B Github</a> - <a href="https://huggingface.co/datasets/Nexusflow/NexusRaven_API_evaluation" target="_blank">NexusRaven API evaluation dataset</a>
</p>
<p align="center" width="100%">
<a><img src="NexusRaven.png" alt="NexusRaven" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Table of contents
- [NexusRaven-13B: Surpassing the state-of-the-art in open-source function calling LLMs.](#nexusraven-13b-surpassing-the-state-of-the-art-in-open-source-function-calling-llms)
- [Introducing NexusRaven-13B](#introducing-nexusraven-13b)
- [NexusRaven model usage](#nexusraven-model-usage)
- [Training procedure](#training-procedure)
- [Training hyperparameters](#training-hyperparameters)
- [Framework versions](#framework-versions)
- [Limitations](#limitations)
- [License](#license)
- [References](#references)
- [Citation](#citation)
- [Contact](#contact)
This model is a fine-tuned version of [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf).
## Introducing NexusRaven-13B
NexusRaven is an open-source and commercially viable function calling LLM that surpasses the state-of-the-art in function calling capabilities.
📊 Performance Highlights: With our demonstration retrieval system, NexusRaven-13B achieves a 95% success rate in using cybersecurity tools such as CVE/CPE Search and VirusTotal, while prompting GPT-4 achieves 64%. It has significantly lower cost and faster inference speed compared to GPT-4.
🔧 Generalization to the Unseen: NexusRaven-13B generalizes to tools never seen during model training, achieving a success rate comparable with GPT-3.5 in a zero-shot setting and significantly outperforming all other open-source LLMs of similar size.
🔥 Commercially Permissive: The training of NexusRaven-13B does not involve any data generated by proprietary LLMs such as GPT-4. You have full control of the model when deployed in commercial applications.
<p align="center" width="100%">
<a><img src="Retrieval-augmented_Evaluation.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
<a><img src="Zero-shot_Evaluation.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## NexusRaven model usage
NexusRaven accepts a list of Python functions. These functions can do anything (including sending GET/POST requests to external APIs!). The only two requirements are the Python function signature and an appropriate docstring to generate the function call.
NexusRaven is highly compatible with langchain. See [langchain_example.py](https://huggingface.co/Nexusflow/NexusRaven-13B/blob/main/langchain_example.py). An example without langchain can be found in [non_langchain_example.py](https://huggingface.co/Nexusflow/NexusRaven-13B/blob/main/non_langchain_example.py)
Please note that the model will sometimes reflect on its answer, so we highly recommend stopping generation at the stop criterion `["\nReflection:"]` to avoid spending unnecessary tokens during inference; in rare cases, however, the reflection may improve the answer. This is reflected in our langchain example.
The "Initial Answer" can be executed to run the function.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 2.0
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
# Limitations
1. We highly recommend using a stop criterion of `["\nReflection:"]`. The model was trained to first generate an answer and then reflect on that answer to either improve it or keep it the same. However, this "chain of thought" is often not helpful, and the final answer is seldom better than the initial call. Therefore, we strongly recommend using the Initial Call as the main call to execute.
2. When a multitude of functions is available, the model works best when connected to a retriever, as a large number of functions will saturate its context window.
3. The model can be prone to generating incorrect calls. Please ensure proper guardrails to capture errant behavior are in place.
## License
This model was trained on commercially viable data and is licensed under the [Llama 2 community license](https://huggingface.co/codellama/CodeLlama-13b-hf/blob/main/LICENSE) following the original [CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf/) model.
## References
We thank the CodeLlama team for their amazing models!
```
@misc{rozière2023code,
title={Code Llama: Open Foundation Models for Code},
author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve},
year={2023},
eprint={2308.12950},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Citation
```
@misc{nexusraven,
title={NexusRaven: Surpassing the state-of-the-art in open-source function calling LLMs},
author={Nexusflow.ai team},
year={2023},
url={http://nexusflow.ai/blog}
}
```
## Contact
Please reach out to [email protected] for any questions!
|
alexisdpc/my_awesome_billsum_model | alexisdpc | 2023-09-29T16:15:54Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-29T10:47:45Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1391
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5516
- Rouge1: 0.1391
- Rouge2: 0.0508
- Rougel: 0.1154
- Rougelsum: 0.1155
- Gen Len: 19.0
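A hedged usage sketch follows; the input text is only a placeholder legislative excerpt, and generation lengths are illustrative:

```python
# Minimal summarization sketch; the input text is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="alexisdpc/my_awesome_billsum_model")
text = "The bill would amend the state education code to ..."  # placeholder document
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```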
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8459 | 0.1294 | 0.0382 | 0.1079 | 0.1077 | 19.0 |
| No log | 2.0 | 124 | 2.6321 | 0.139 | 0.0494 | 0.1153 | 0.1152 | 19.0 |
| No log | 3.0 | 186 | 2.5683 | 0.1369 | 0.0484 | 0.1133 | 0.1133 | 19.0 |
| No log | 4.0 | 248 | 2.5516 | 0.1391 | 0.0508 | 0.1154 | 0.1155 | 19.0 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kaifahmad/wav2vec2-large-xls-r-300m-tr-colab | kaifahmad | 2023-09-29T15:59:50Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-09-28T11:44:08Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-tr-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.3005821672964968
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3889
- Wer: 0.3006
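A hedged CTC inference sketch is shown below; "sample.wav" is a placeholder path, resampled to the 16 kHz input the XLS-R backbone expects:

```python
# Minimal CTC inference sketch; "sample.wav" is a placeholder path.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("kaifahmad/wav2vec2-large-xls-r-300m-tr-colab")
model = Wav2Vec2ForCTC.from_pretrained("kaifahmad/wav2vec2-large-xls-r-300m-tr-colab")

speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```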
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8274 | 3.67 | 400 | 0.6752 | 0.6946 |
| 0.4002 | 7.34 | 800 | 0.4440 | 0.5183 |
| 0.1961 | 11.01 | 1200 | 0.4133 | 0.4052 |
| 0.1285 | 14.68 | 1600 | 0.4249 | 0.3737 |
| 0.0966 | 18.35 | 2000 | 0.4019 | 0.3606 |
| 0.0789 | 22.02 | 2400 | 0.4019 | 0.3316 |
| 0.0599 | 25.69 | 2800 | 0.3996 | 0.3078 |
| 0.047 | 29.36 | 3200 | 0.3889 | 0.3006 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
reginaboateng/finnal_compacter_Bioasq_adapter | reginaboateng | 2023-09-29T15:33:32Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:biaoasq",
"dataset:bioasq7b",
"region:us"
] | null | 2023-09-29T15:33:30Z | ---
tags:
- bert
- adapterhub:biaoasq
- adapter-transformers
datasets:
- bioasq7b
---
# Adapter `reginaboateng/finnal_compacter_Bioasq_adapter` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [biaoasq](https://adapterhub.ml/explore/biaoasq/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/finnal_compacter_Bioasq_adapter", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
chats-bug/llama-2-13b-email-subject-finetuned | chats-bug | 2023-09-29T15:13:23Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-28T10:17:57Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
rasta/distilbert-base-uncased-finetuned-fashion | rasta | 2023-09-29T15:03:55Z | 112 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-09T07:49:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-finetuned-fashion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-fashion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a manually created dataset in order to distinguish fashion (label_0) from non-fashion (label_1) items.
It achieves the following results on the evaluation set:
- Loss: 0.0809
- Accuracy: 0.98
- F1: 0.9801
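A hedged usage sketch (per the description above, label_0 corresponds to fashion and label_1 to non-fashion; the example sentence is a placeholder):

```python
# Minimal inference sketch; the example sentence is a placeholder.
from transformers import pipeline

classifier = pipeline("text-classification", model="rasta/distilbert-base-uncased-finetuned-fashion")
print(classifier("Slim-fit denim jacket with embroidered details"))  # expected to map to the fashion label (label_0)
```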
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4017 | 1.0 | 47 | 0.1220 | 0.966 | 0.9662 |
| 0.115 | 2.0 | 94 | 0.0809 | 0.98 | 0.9801 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
malex1701d/llama2-7b-chat-hf-primutec | malex1701d | 2023-09-29T14:57:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:malex1701d/primutec_info_20",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-29T14:32:18Z | ---
datasets:
- malex1701d/primutec_info_20
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [NousResearch/Llama-2-7b-chat-hf]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RogerB/afro-xlmr-large-kinyarwanda-finetuned | RogerB | 2023-09-29T14:57:02Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-large",
"base_model:finetune:Davlan/afro-xlmr-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-28T09:56:43Z | ---
license: mit
base_model: Davlan/afro-xlmr-large
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-kinyarwanda-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-kinyarwanda-finetuned
This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xlmr-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3557 | 1.0 | 1250 | 1.2004 |
| 1.2352 | 2.0 | 2500 | 1.1377 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
openaccess-ai-collective/tiny-mistral | openaccess-ai-collective | 2023-09-29T14:50:37Z | 17,213 | 12 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-28T15:10:32Z | mistral architecture model, randomly initialized. useful for e2e testing. |
gokuls/HBERTv1_emb_compress_48_L10_H512_A8 | gokuls | 2023-09-29T14:49:50Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"dataset:gokuls/wiki_book_corpus_complete_processed_bert_dataset",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-27T06:39:46Z | ---
tags:
- generated_from_trainer
datasets:
- gokuls/wiki_book_corpus_complete_processed_bert_dataset
metrics:
- accuracy
model-index:
- name: HBERTv1_emb_compress_48_L10_H512_A8
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: gokuls/wiki_book_corpus_complete_processed_bert_dataset
type: gokuls/wiki_book_corpus_complete_processed_bert_dataset
metrics:
- name: Accuracy
type: accuracy
value: 0.17367944889882433
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HBERTv1_emb_compress_48_L10_H512_A8
This model was trained on the gokuls/wiki_book_corpus_complete_processed_bert_dataset dataset (no base checkpoint is listed in the card).
It achieves the following results on the evaluation set:
- Loss: 5.7680
- Accuracy: 0.1737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 56
- eval_batch_size: 56
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 7.1035 | 0.1 | 10000 | 7.0837 | 0.0844 |
| 6.6799 | 0.19 | 20000 | 6.6737 | 0.1072 |
| 6.5327 | 0.29 | 30000 | 6.5279 | 0.1194 |
| 6.4362 | 0.38 | 40000 | 6.4358 | 0.1272 |
| 6.3648 | 0.48 | 50000 | 6.3700 | 0.1335 |
| 6.3181 | 0.57 | 60000 | 6.3158 | 0.1355 |
| 6.2776 | 0.67 | 70000 | 6.2769 | 0.1380 |
| 6.2469 | 0.76 | 80000 | 6.2438 | 0.1400 |
| 6.218 | 0.86 | 90000 | 6.2187 | 0.1422 |
| 6.2036 | 0.96 | 100000 | 6.1963 | 0.1434 |
| 6.1806 | 1.05 | 110000 | 6.1776 | 0.1451 |
| 6.1591 | 1.15 | 120000 | 6.1621 | 0.1456 |
| 6.1503 | 1.24 | 130000 | 6.1473 | 0.1468 |
| 6.1391 | 1.34 | 140000 | 6.1357 | 0.1466 |
| 6.126 | 1.43 | 150000 | 6.1230 | 0.1477 |
| 6.1145 | 1.53 | 160000 | 6.1133 | 0.1479 |
| 6.1067 | 1.62 | 170000 | 6.1040 | 0.1486 |
| 6.097 | 1.72 | 180000 | 6.0966 | 0.1488 |
| 6.0825 | 1.82 | 190000 | 6.0875 | 0.1492 |
| 6.0783 | 1.91 | 200000 | 6.0797 | 0.1494 |
| 6.0673 | 2.01 | 210000 | 6.0730 | 0.1499 |
| 6.066 | 2.1 | 220000 | 6.0623 | 0.1501 |
| 6.0534 | 2.2 | 230000 | 6.0510 | 0.1504 |
| 6.0004 | 2.29 | 240000 | 5.9972 | 0.1517 |
| 5.9609 | 2.39 | 250000 | 5.9492 | 0.1530 |
| 5.93 | 2.49 | 260000 | 5.9169 | 0.1551 |
| 5.9058 | 2.58 | 270000 | 5.8895 | 0.1571 |
| 5.8834 | 2.68 | 280000 | 5.8618 | 0.1597 |
| 5.8572 | 2.77 | 290000 | 5.8394 | 0.1623 |
| 5.8296 | 2.87 | 300000 | 5.8168 | 0.1661 |
| 5.8085 | 2.96 | 310000 | 5.7926 | 0.1703 |
| 5.7873 | 3.06 | 320000 | 5.7663 | 0.1739 |
### Framework versions
- Transformers 4.33.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Vijish/alphamask | Vijish | 2023-09-29T14:45:05Z | 3 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-09-29T14:00:14Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-Vijish/alphamask
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
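A minimal usage sketch (not part of the original card) with the diffusers ControlNet pipeline; the conditioning image path and the prompt are hypothetical placeholders.
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load these ControlNet weights and attach them to the SD 2.1 base model.
controlnet = ControlNetModel.from_pretrained("Vijish/alphamask", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

condition = load_image("mask.png")  # conditioning image (hypothetical placeholder)
image = pipe("a portrait photo", image=condition, num_inference_steps=30).images[0]
image.save("out.png")
```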
|
ProtonH/PPO-LunarLander-v2 | ProtonH | 2023-09-29T14:43:37Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-29T13:27:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.45 +/- 17.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
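As a hedged sketch of what that usage could look like — the checkpoint filename `ppo-LunarLander-v2.zip` below is an assumption based on the usual sb3 naming convention, so check the repository files first:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption).
checkpoint = load_from_hub("ProtonH/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Run a single prediction step (requires gymnasium[box2d] for LunarLander).
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```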
|
gokuls/HBERTv1_emb_compress_48_L10_H768_A12 | gokuls | 2023-09-29T14:39:29Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"dataset:gokuls/wiki_book_corpus_complete_processed_bert_dataset",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-27T06:39:55Z | ---
tags:
- generated_from_trainer
datasets:
- gokuls/wiki_book_corpus_complete_processed_bert_dataset
metrics:
- accuracy
model-index:
- name: HBERTv1_emb_compress_48_L10_H768_A12
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: gokuls/wiki_book_corpus_complete_processed_bert_dataset
type: gokuls/wiki_book_corpus_complete_processed_bert_dataset
metrics:
- name: Accuracy
type: accuracy
value: 0.3705453911691882
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HBERTv1_emb_compress_48_L10_H768_A12
This model is a fine-tuned version of [](https://huggingface.co/) on the gokuls/wiki_book_corpus_complete_processed_bert_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1748
- Accuracy: 0.3705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 7.1074 | 0.08 | 10000 | 7.0838 | 0.0828 |
| 6.6784 | 0.16 | 20000 | 6.6795 | 0.1075 |
| 6.535 | 0.25 | 30000 | 6.5322 | 0.1192 |
| 6.4482 | 0.33 | 40000 | 6.4390 | 0.1267 |
| 6.3716 | 0.41 | 50000 | 6.3711 | 0.1324 |
| 6.3233 | 0.49 | 60000 | 6.3219 | 0.1351 |
| 6.2821 | 0.57 | 70000 | 6.2781 | 0.1383 |
| 6.251 | 0.66 | 80000 | 6.2431 | 0.1408 |
| 6.2159 | 0.74 | 90000 | 6.2111 | 0.1425 |
| 6.1838 | 0.82 | 100000 | 6.1774 | 0.1444 |
| 6.1338 | 0.9 | 110000 | 6.1349 | 0.1464 |
| 6.1022 | 0.98 | 120000 | 6.0939 | 0.1481 |
| 6.0194 | 1.07 | 130000 | 6.0080 | 0.1517 |
| 5.9309 | 1.15 | 140000 | 5.9199 | 0.1642 |
| 5.8593 | 1.23 | 150000 | 5.8326 | 0.1769 |
| 5.7093 | 1.31 | 160000 | 5.6659 | 0.2040 |
| 5.5018 | 1.39 | 170000 | 5.4433 | 0.2339 |
| 5.3036 | 1.47 | 180000 | 5.2292 | 0.2576 |
| 5.0629 | 1.56 | 190000 | 4.9895 | 0.2834 |
| 4.8311 | 1.64 | 200000 | 4.7638 | 0.3085 |
| 4.6239 | 1.72 | 210000 | 4.5799 | 0.3278 |
| 4.4305 | 1.8 | 220000 | 4.3821 | 0.3471 |
| 4.2209 | 1.88 | 230000 | 4.1749 | 0.3704 |
### Framework versions
- Transformers 4.33.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-randomized_9_layers_0.003_lr_8_e | roa7n | 2023-09-29T14:38:09Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T14:38:06Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
chats-bug/alabala_test | chats-bug | 2023-09-29T14:32:35Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T14:14:40Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
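For reference, a minimal sketch (not part of the original card) of how this config maps onto `transformers.BitsAndBytesConfig`; the base model it would be passed to is not specified here:
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
    llm_int8_threshold=6.0,
)
# Passed at load time, e.g.:
# AutoModelForCausalLM.from_pretrained(<base_model>, quantization_config=bnb_config)
```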
### Framework versions
- PEFT 0.6.0.dev0
|
tiiuae/falcon-40b-instruct | tiiuae | 2023-09-29T14:32:27Z | 132,750 | 1,173 | transformers | [
"transformers",
"pytorch",
"falcon",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"arxiv:2304.01196",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-25T10:14:36Z | ---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
# ✨ Falcon-40B-Instruct
**Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of [Baize](https://github.com/project-baize/baize-chatbot). It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-40B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).**
* **Falcon-40B is the best open-source model available.** It outperforms [LLaMA](https://github.com/facebookresearch/llama), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT](https://huggingface.co/mosaicml/mpt-7b), etc. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).
💸 **Looking for a smaller, less expensive model?** [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) is Falcon-40B-Instruct's little brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
You will need **at least 85-100GB of memory** to swiftly run inference with Falcon-40B.
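If that much memory is not available, one option (not covered by the card itself) is to load the model in 4-bit with bitsandbytes, at some cost in quality and speed; a minimal sketch:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tiiuae/falcon-40b-instruct"
# 4-bit quantization roughly quarters the weight memory footprint.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```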
# Model Card for Falcon-40B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-40B-Instruct has been finetuned on a chat dataset.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-40B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-40B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-40B-Instruct was finetuned on 150M tokens from [Baize](https://github.com/project-baize/baize-chatbot) mixed with 5% of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) data.
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
For more information about pretraining, see [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).
### Model Architecture and Objective
Falcon-40B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
For multiquery, we are using an internal variant which uses independent key and values per tensor parallel degree.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 60 | |
| `d_model` | 8192 | |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-40B-Instruct was trained on AWS SageMaker, on 64 A100 40GB GPUs in P4d instances.
#### Software
Falcon-40B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
To cite the [Baize](https://github.com/project-baize/baize-chatbot) instruction dataset used for this model:
```
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
```
## License
Falcon-40B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected] |
gianpag/dbooth | gianpag | 2023-09-29T14:26:23Z | 3 | 2 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-09-28T13:10:13Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Professional linkedin headshot photo
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
gokuls/HBERTv1_emb_compress_48_L12_H256_A4 | gokuls | 2023-09-29T14:24:30Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"dataset:gokuls/wiki_book_corpus_complete_processed_bert_dataset",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-26T17:53:04Z | ---
tags:
- generated_from_trainer
datasets:
- gokuls/wiki_book_corpus_complete_processed_bert_dataset
metrics:
- accuracy
model-index:
- name: HBERTv1_emb_compress_48_L12_H256_A4
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: gokuls/wiki_book_corpus_complete_processed_bert_dataset
type: gokuls/wiki_book_corpus_complete_processed_bert_dataset
metrics:
- name: Accuracy
type: accuracy
value: 0.15102291312237043
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HBERTv1_emb_compress_48_L12_H256_A4
This model is a fine-tuned version of [](https://huggingface.co/) on the gokuls/wiki_book_corpus_complete_processed_bert_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0478
- Accuracy: 0.1510
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 7.1159 | 0.11 | 10000 | 7.0948 | 0.0805 |
| 6.698 | 0.22 | 20000 | 6.6913 | 0.1060 |
| 6.5481 | 0.33 | 30000 | 6.5473 | 0.1167 |
| 6.4589 | 0.44 | 40000 | 6.4576 | 0.1252 |
| 6.3925 | 0.55 | 50000 | 6.3858 | 0.1306 |
| 6.3433 | 0.66 | 60000 | 6.3356 | 0.1353 |
| 6.2983 | 0.76 | 70000 | 6.2965 | 0.1376 |
| 6.268 | 0.87 | 80000 | 6.2643 | 0.1397 |
| 6.2359 | 0.98 | 90000 | 6.2381 | 0.1411 |
| 6.2186 | 1.09 | 100000 | 6.2160 | 0.1429 |
| 6.1915 | 1.2 | 110000 | 6.1972 | 0.1439 |
| 6.1811 | 1.31 | 120000 | 6.1834 | 0.1440 |
| 6.1696 | 1.42 | 130000 | 6.1692 | 0.1455 |
| 6.1621 | 1.53 | 140000 | 6.1557 | 0.1454 |
| 6.1417 | 1.64 | 150000 | 6.1466 | 0.1468 |
| 6.1391 | 1.75 | 160000 | 6.1364 | 0.1466 |
| 6.1338 | 1.86 | 170000 | 6.1281 | 0.1476 |
| 6.1285 | 1.97 | 180000 | 6.1200 | 0.1477 |
| 6.1147 | 2.08 | 190000 | 6.1135 | 0.1483 |
| 6.1139 | 2.18 | 200000 | 6.1083 | 0.1486 |
| 6.1004 | 2.29 | 210000 | 6.1004 | 0.1487 |
| 6.0997 | 2.4 | 220000 | 6.0964 | 0.1489 |
| 6.092 | 2.51 | 230000 | 6.0922 | 0.1490 |
| 6.089 | 2.62 | 240000 | 6.0862 | 0.1490 |
| 6.0841 | 2.73 | 250000 | 6.0829 | 0.1498 |
| 6.0847 | 2.84 | 260000 | 6.0799 | 0.1496 |
| 6.0834 | 2.95 | 270000 | 6.0760 | 0.1501 |
| 6.0752 | 3.06 | 280000 | 6.0715 | 0.1502 |
| 6.0693 | 3.17 | 290000 | 6.0697 | 0.1502 |
| 6.0677 | 3.28 | 300000 | 6.0679 | 0.1502 |
| 6.0646 | 3.39 | 310000 | 6.0646 | 0.1503 |
| 6.0625 | 3.5 | 320000 | 6.0623 | 0.1503 |
| 6.0536 | 3.6 | 330000 | 6.0593 | 0.1507 |
| 6.0574 | 3.71 | 340000 | 6.0577 | 0.1507 |
| 6.0496 | 3.82 | 350000 | 6.0560 | 0.1508 |
| 6.0525 | 3.93 | 360000 | 6.0543 | 0.1507 |
| 6.0498 | 4.04 | 370000 | 6.0508 | 0.1509 |
| 6.0557 | 4.15 | 380000 | 6.0509 | 0.1508 |
| 6.0445 | 4.26 | 390000 | 6.0483 | 0.1509 |
| 6.0466 | 4.37 | 400000 | 6.0470 | 0.1510 |
| 6.0507 | 4.48 | 410000 | 6.0471 | 0.1510 |
| 6.0459 | 4.59 | 420000 | 6.0468 | 0.1510 |
### Framework versions
- Transformers 4.33.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jake-walker/ppo-LunarLander-v2 | jake-walker | 2023-09-29T14:23:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-29T14:22:51Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.02 +/- 75.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
niklasg/test_emotion_detection_gersti | niklasg | 2023-09-29T14:09:25Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:generator",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-15T15:44:08Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- accuracy
- f1
model-index:
- name: test_emotion_detection_gersti
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: generator
type: generator
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5371057513914657
- name: F1
type: f1
value: 0.14268320711165708
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_emotion_detection_gersti
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6884
- Accuracy: 0.5371
- F1: 0.1427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
IAteSpaghettiForLunch/GLaDOS-AI-main | IAteSpaghettiForLunch | 2023-09-29T14:02:05Z | 0 | 0 | tf-keras | [
"tf-keras",
"conversational",
"license:unknown",
"region:us"
] | text-generation | 2023-09-29T14:00:23Z | ---
license: unknown
pipeline_tag: conversational
--- |
csukuangfj/icefall_asr_aishell_conformer_ctc | csukuangfj | 2023-09-29T13:57:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-09-29T12:22:40Z | ---
license: apache-2.0
---
# Introduction
This repo is from
https://huggingface.co/pkufool/icefall_asr_aishell_conformer_ctc |
Irvanaja/Sovits.teio | Irvanaja | 2023-09-29T13:54:52Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-09-29T13:54:52Z | ---
license: bigscience-openrail-m
---
|
milaidy/dannyy | milaidy | 2023-09-29T13:48:05Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-09-29T13:33:58Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dannyy Dreambooth model trained by milaidy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
trymtv/speecht5_tts_nps | trymtv | 2023-09-29T13:40:47Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"no",
"dataset:NbAiLab/NPSC",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-09-29T11:06:25Z | ---
language:
- 'no'
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- NbAiLab/NPSC
model-index:
- name: speecht5_tts_npsc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_tts_npsc
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the NbAiLab/NPSC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
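As an illustration only (not the authors' script), these hyperparameters correspond roughly to the following `Seq2SeqTrainingArguments`; `output_dir` is a hypothetical path:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_npsc",       # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,        # effective train batch size of 32
    warmup_steps=500,
    max_steps=4000,
    lr_scheduler_type="linear",
    seed=42,
)
```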
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5489 | 2.42 | 1000 | 0.5087 |
| 0.5217 | 4.83 | 2000 | 0.4842 |
| 0.5151 | 7.25 | 3000 | 0.4770 |
| 0.5147 | 9.66 | 4000 | 0.4745 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
LeeEric/openbuddy-codellama2-34b-v11.1-GGUF | LeeEric | 2023-09-29T13:34:58Z | 2 | 1 | null | [
"gguf",
"code",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:llama2",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-29T08:49:37Z | ---
license: llama2
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
tags:
- code
---
# OpenBuddy CodeLlama2 34B V11.1 - GGUF
- Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
- Original model: [OpenBuddy CodeLlama2 34B V11.1](https://huggingface.co/OpenBuddy/openbuddy-codellama2-34b-v11.1-bf16)
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openbuddy-codellama2-34b-v11.1-Q4_K_M.gguf](https://huggingface.co/LeeEric/openbuddy-codellama2-34b-v11.1-GGUF/blob/main/openbuddy-codellama2-34b-v11.1-Q4_K_M.gguf) | Q4_K_M | 4 | 20.3 GB| 22.8 GB | medium, balanced quality - recommended |
<!-- README_GGUF.md-provided-files end -->
<!-- prompt-template start -->
## Prompt template: OpenBuddy
```
You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
You like to use emojis. You can speak fluently in many languages, for example: English, Chinese.
You cannot access the internet, but you have vast knowledge, cutoff: 2021-09.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.
```
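A minimal sketch of running the Q4_K_M file with the llama-cpp-python bindings; the choice of runtime and the `User:`/`Assistant:` turn format are assumptions, not something this card prescribes:
```python
from llama_cpp import Llama

# Path matches the provided-files table above.
llm = Llama(model_path="openbuddy-codellama2-34b-v11.1-Q4_K_M.gguf", n_ctx=4096)

system = "You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. ..."
prompt = f"{system}\n\nUser: Write a Python function that reverses a string.\nAssistant:"
out = llm(prompt, max_tokens=256, stop=["User:"])
print(out["choices"][0]["text"])
```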
<!-- prompt-template end --> |
Yntec/3Danimation | Yntec | 2023-09-29T13:32:47Z | 375 | 10 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Disney",
"3D",
"Lykon",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-09-29T12:47:37Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Disney
- 3D
- Lykon
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
language:
- en
inference: true
---
# 3D Animation Diffusion
Original model page: https://civitai.com/models/118086/3d-animation-diffusion
Sample and prompt:

Cartoon Pretty CUTE Girl, DETAILED CHIBI EYES, ilya kuvshinov detailed legs, gorgeous detailed hair, high school, Magazine ad, iconic, 1949, sharp focus. visible brushstrokes By KlaysMoji and artgerm and Clay Mann and and leyendecker and simon cowell. By Dave Rapoza. Pretty CUTE girl. |
Sumsub/Sumsub-ffs-synthetic-2.0 | Sumsub | 2023-09-29T13:18:16Z | 3 | 6 | generic | [
"generic",
"ai_or_not",
"sumsub",
"image_classification",
"sumsubaiornot",
"aiornot",
"deepfake",
"synthetic",
"generated",
"pytorch",
"image-classification",
"license:cc-by-sa-3.0",
"region:us"
] | image-classification | 2023-09-26T08:22:25Z | ---
library_name: generic
license: cc-by-sa-3.0
pipeline_tag: image-classification
tags:
- ai_or_not
- sumsub
- image_classification
- sumsubaiornot
- aiornot
- deepfake
- synthetic
- generated
- pytorch
metrics:
- accuracy
widget:
- src: >-
https://huggingface.co/Sumsub/Sumsub-ffs-synthetic-2.0/resolve/main/images/2.jpg
example_title: Pope Francis(yellow puffer)
- src: >-
https://huggingface.co/Sumsub/Sumsub-ffs-synthetic-2.0/resolve/main/images/3.jpg
example_title: Pentagon explosion
- src: >-
https://huggingface.co/Sumsub/Sumsub-ffs-synthetic-2.0/resolve/main/images/4.webp
example_title: Trump arrest
---
# For Fake's Sake: a set of models for detecting generated and synthetic images
Many people on the internet have recently been tricked by fake images of Pope Francis wearing a coat or of Donald Trump's arrest.
To help combat this issue, we provide detectors for such images generated by popular tools like Midjourney and Stable Diffusion.
|  |  |  |
|-------------------------|-------------------------|--------------------------|
## Model Details
### Model Description
- **Developed by:** [Sumsub AI team](https://sumsub.com/)
- **Model type:** Image classification
- **License:** CC-By-SA-3.0
- **Types:**
- **Finetuned from model:** *convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384*
## Demo
The demo page can be found [here](https://huggingface.co/spaces/Sumsub/Sumsub-ffs-demo).
## How to Get Started with the Model & Model Sources
Use the code below to get started with the model:
```bash
git lfs install
git clone https://huggingface.co/Sumsub/Sumsub-ffs-synthetic-2.0 sumsub_ffs_synthetic_v2
```
```python
# The clone target above uses underscores so that it is importable as a Python package.
from sumsub_ffs_synthetic_v2.pipeline import PreTrainedPipeline
from PIL import Image
pipe = PreTrainedPipeline("sumsub_ffs_synthetic_v2/")
img = Image.open("sumsub_ffs_synthetic_v2/images/2.jpg")
result = pipe(img)
print(result)
```
You may need these prerequisites installed:
```bash
pip install -r requirements.txt
pip install "git+https://github.com/rwightman/pytorch-image-models"
pip install "git+https://github.com/huggingface/huggingface_hub"
```
## Training Details
### Training Data
The models were trained on the following datasets:
- *Real photos*: [MS COCO](https://cocodataset.org/#home), [VizWiz](https://vizwiz.org/tasks-and-datasets/vqa/).
- *AI photos*: [Midjourney](https://pin.it/13UkjgM), [Midjourney AI Art](https://pin.it/6pNXlz3), [Midjourney - Community Showcase](https://pin.it/7gi4jmT), [Midjourney](https://pin.it/4FW0LXQ), [MIDJOURNEY](https://pin.it/5mSsiPg), [Midjourney](https://pin.it/2Qx92QW), [aiornot HuggingFace contest data](https://huggingface.co/datasets/competitions/aiornot), [Stable Diffusion Wordnet Dataset](https://www.kaggle.com/datasets/astoeckl/stable-diffusion-wordnet-dataset).
### Training Procedure
To improve the performance metrics, we used data augmentations such as rotation, crop, Mixup and CutMix. Each model was trained for 30 epochs using early stopping with batch size equal to 32.
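As a rough illustration only (the authors' training code is not published here), this kind of augmentation pipeline could be expressed with torchvision transforms plus timm's Mixup/CutMix helper; the alpha values and `num_classes=2` are assumptions:
```python
from timm.data import Mixup
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(384),   # input size of the ConvNeXt backbone above
    transforms.RandomRotation(15),       # rotation augmentation (angle is an assumption)
    transforms.ToTensor(),
])

# Mixup/CutMix applied per batch inside the training loop:
mixup_fn = Mixup(mixup_alpha=0.8, cutmix_alpha=1.0, num_classes=2)
# images, targets = mixup_fn(images, targets)
```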
## Evaluation
For evaluation we used the following datasets:
**AI photos:**
- [DiffusionDB](https://github.com/poloclub/diffusiondb): a set of 2 million images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
- [Kaggel SD Faces](https://www.kaggle.com/datasets/bwandowando/faces-dataset-using-stable-diffusion-v14): set of 4k human face images generated using Stable Diffusion 1.4.
- [Stable Diffusion Wordnet Dataset](https://www.kaggle.com/datasets/astoeckl/stable-diffusion-wordnet-dataset): set of 200K images generated by Stable Diffusion.
- [Kaggle Midjourney 2022-250k](https://www.kaggle.com/datasets/ldmtwo/midjourney-250k-csv): set of 250k images generated by Midjourney.
- [Kaggle Midjourney v5.1](https://www.kaggle.com/datasets/iraklip/modjourney-v51-cleaned-data): set of 400k images generated by Midjourney version 5.1.
**Realistic photos:**
- [MS COCO](https://cocodataset.org/#home): set of 120k real world images.
- [VizWiz Visual Question Answering dataset validation part](https://vizwiz.org/tasks-and-datasets/vqa/) : set of 20k photos typically stored on individuals' mobile devices.
These images showcase examples of pictures people keep on their phones in their daily lives.
## Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
| Dataset | Accuracy |
|---------------------------------------------------------------------------------------------------------------|----------|
| [Kaggel SD Faces](https://www.kaggle.com/datasets/bwandowando/faces-dataset-using-stable-diffusion-v14) | 0.984 |
| [DiffusionDB](https://github.com/poloclub/diffusiondb) | 0.920 |
| [Stable Diffusion Wordnet Dataset](https://www.kaggle.com/datasets/astoeckl/stable-diffusion-wordnet-dataset) | 0.950 |
| [MS COCO](https://cocodataset.org/#home) | 0.953 |
| [Kaggle Midjourney 2022-250k](https://www.kaggle.com/datasets/ldmtwo/midjourney-250k-csv) | 0.938 |
| [Kaggle Midjourney v5.1](https://www.kaggle.com/datasets/iraklip/modjourney-v51-cleaned-data) | 0.971 |
| [VizWiz Visual Question Answering dataset validation part](https://vizwiz.org/tasks-and-datasets/vqa/) | 0.998 |
## Limitations
- It should be noted that achieving 100% accuracy is not possible. Therefore, the model output should only be used as an indication that an image may have been (but not definitely) artificially generated.
- Our models may face challenges in accurately predicting the class for real-world examples that are extremely vibrant and of exceptionally high quality. In such cases, the richness of colors and fine details may lead to misclassifications due to the complexity of the input. This could potentially cause the model to focus on visual aspects that are not necessarily indicative of the true class.

## Citation
If you find this useful, please cite as:
```text
@misc{sumsubaiornot,
publisher = {Sumsub},
url = {https://huggingface.co/Sumsub/Sumsub-ffs-synthetic-2.0},
year = {2023},
author = {Savelyev, Alexander and Toropov, Alexey and Goldman-Kalaydin, Pavel and Samarin, Alexey},
title = {For Fake's Sake: a set of models for detecting deepfakes, generated images and synthetic images}
}
```
## References
- Stöckl, Andreas. (2022). Evaluating a Synthetic Image Dataset Generated with Stable Diffusion. 10.48550/arXiv.2211.01777.
- Lin, Tsung-Yi & Maire, Michael & Belongie, Serge & Hays, James & Perona, Pietro & Ramanan, Deva & Dollár, Piotr & Zitnick, C.. (2014). Microsoft COCO: Common Objects in Context.
- Howard, Andrew & Zhu, Menglong & Chen, Bo & Kalenichenko, Dmitry & Wang, Weijun & Weyand, Tobias & Andreetto, Marco & Adam, Hartwig. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.
- Liu, Zhuang & Mao, Hanzi & Wu, Chao-Yuan & Feichtenhofer, Christoph & Darrell, Trevor & Xie, Saining. (2022). A ConvNet for the 2020s.
- Wang, Zijie & Montoya, Evan & Munechika, David & Yang, Haoyang & Hoover, Benjamin & Chau, Polo. (2022). DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models. 10.48550/arXiv.2210.14896.
- Danna Gurari & Qing Li & Abigale J. Stangl & Anhong Guo & Chi Lin & Kristen Grauman & Jiebo Luo & Jeffrey P. Bigham (2018): VizWiz Grand Challenge: Answering Visual Questions from Blind People. CVPR 2018 |
Omid-sar/fine-tuning-llama2-7b-qlora-french | Omid-sar | 2023-09-29T13:16:37Z | 6 | 1 | peft | [
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-09-18T20:44:17Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
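A minimal sketch (not part of the card) of reloading the base model with the same 4-bit settings and attaching this adapter via PEFT:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirror the nf4 quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Omid-sar/fine-tuning-llama2-7b-qlora-french")
```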
### Framework versions
# Fine-tuning Llama-2-7b using QLoRA in French on Google Colab
## Goal
The goal of this project is to adapt the Llama-2-7b model, which initially might not have proficiency in French, to understand and respond accurately to queries in the French language. This adaptation involves fine-tuning the model on a dataset of French novels, allowing it to comprehend the nuances, syntax, and semantics of the French language. By leveraging the PEFT library from the Hugging Face ecosystem and QLoRA for more memory-efficient fine-tuning on a single T4 GPU provided by Google Colab, we aim to create a chatbot that can effectively answer questions posed in French.
## Overview
This project involves several steps including setting up the environment, loading the dataset and model, configuring QLoRA and training parameters, training the model, and finally testing and pushing the fine-tuned model to Hugging Face.
## Features
- **Dataset Loading**: Load and process a French novels dataset using Hugging Face datasets library.
- **Model Quantization**: Quantize the base Llama-2-7b model into 4-bit using bitsandbytes.
- **Configuration for QLoRA**: Apply the QLoRA configuration for more memory-efficient fine-tuning using the PEFT library.
- **Training**: Use the SFTTrainer from the TRL library for instruction-based fine-tuning.
- **Testing and Pushing to Hugging Face**: Test the fine-tuned model and push it to Hugging Face.
## Prerequisites
- Google Colab with T4 GPU
- Python libraries: trl, transformers, accelerate, peft, datasets, bitsandbytes, einops
|
Ioana23/mt5-small-finetuned-amazon-en-es | Ioana23 | 2023-09-29T13:12:54Z | 3 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-28T11:53:08Z | ---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_keras_callback
model-index:
- name: Ioana23/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Ioana23/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.7725
- Validation Loss: 3.5472
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 4832, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 12.0296 | 5.3430 | 0 |
| 6.9353 | 4.1188 | 1 |
| 5.9627 | 3.8218 | 2 |
| 5.4505 | 3.6813 | 3 |
| 5.1620 | 3.6219 | 4 |
| 4.9629 | 3.5810 | 5 |
| 4.8520 | 3.5574 | 6 |
| 4.7725 | 3.5472 | 7 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Boray/LLama2SA_Tag3E | Boray | 2023-09-29T13:11:42Z | 0 | 0 | null | [
"conversational",
"tr",
"region:us"
] | text-generation | 2023-09-29T12:36:53Z | ---
language:
- tr
pipeline_tag: conversational
--- |
erkam/sg2im-256-bs-16x2-cc-depth-12k | erkam | 2023-09-29T12:48:00Z | 1 | 0 | diffusers | [
"diffusers",
"sg-to-image",
"scene-graph",
"stable-diffusion",
"stable-diffusion-diffusers",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-26T10:31:34Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- sg-to-image
- scene-graph
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - erkam/sg2im-256-bs-16x2-cc-depth-12k
These are LoRA adaptation weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the erkam/clevr-full-v5 dataset. You can find some example images in the following.
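A minimal loading sketch (not from the card); depending on the diffusers version, `pipe.unet.load_attn_procs(...)` may be needed instead of `load_lora_weights`, and the prompt below is a hypothetical example:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
# Apply the LoRA adaptation weights from this repository.
pipe.load_lora_weights("erkam/sg2im-256-bs-16x2-cc-depth-12k")

image = pipe("a render of red cubes on a table", num_inference_steps=30).images[0]
image.save("out.png")
```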
|
PPV/FoodImageClassifier | PPV | 2023-09-29T12:42:36Z | 216 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-09-29T12:42:28Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: FoodImageClassifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8936170339584351
---
# FoodImageClassifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Chicken Breast

#### Dosa

#### Guava

#### Idli

#### White Rice
 |
phanerozoic/OpenOrca-Platypus2-13B-PirateLora | phanerozoic | 2023-09-29T12:39:06Z | 0 | 0 | null | [
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-09-26T19:32:17Z | ---
license: cc-by-nc-4.0
language:
- en
---
OpenOrca-Platypus2-13B-PirateLora
This repo contains a Low-Rank Adapter (LoRA) for OpenOrca-Platypus2 13B (float16) fitted on a simple dataset comprising thousands of pirate phrases, conversation pieces, and obscura. The purpose behind generating this LoRA was to determine whether dialect and diction could be enforced through LoRA fine-tuning. Results were much better than those of the previous adapter we created for Llama 2, but this may be due to a combination of effects: the superior performance of the base model compared to Llama 2, and the higher-quality training set compared to our previous effort. |
alexisdpc/my_awesome_wnut_model | alexisdpc | 2023-09-29T12:30:39Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-09-29T12:05:26Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5716694772344013
- name: Recall
type: recall
value: 0.31417979610750696
- name: F1
type: f1
value: 0.4055023923444976
- name: Accuracy
type: accuracy
value: 0.9413877132230345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2696
- Precision: 0.5717
- Recall: 0.3142
- F1: 0.4055
- Accuracy: 0.9414
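A minimal usage sketch (not part of the generated card) with the transformers pipeline API; the example sentence is arbitrary:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="alexisdpc/my_awesome_wnut_model",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("The Golden State Warriors played in San Francisco last night."))
```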
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2756 | 0.5691 | 0.2632 | 0.3599 | 0.9389 |
| No log | 2.0 | 426 | 0.2696 | 0.5717 | 0.3142 | 0.4055 | 0.9414 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ayoubkirouane/BERT-base_NER-ar | ayoubkirouane | 2023-09-29T12:19:39Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"ar",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-09-29T11:24:24Z | ---
datasets:
- wikiann
language:
- ar
pipeline_tag: token-classification
---
## Model Name: BERT-base_NER-ar
### Model Description :
**BERT-base_NER-ar** is a fine-tuned **BERT** multilingual base model for Named Entity Recognition (NER) in Arabic. The base model was pretrained on a diverse set of languages and fine-tuned specifically for the task of NER using the "wikiann" dataset. This model is case-sensitive, distinguishing between different letter cases, such as "english" and "English."
### Dataset
The model was fine-tuned on the **wikiann** dataset, which is a multilingual named entity recognition dataset. It contains Wikipedia articles annotated with three types of named entities: LOC (location), PER (person), and ORG (organization). The annotations are in the IOB2 format. The dataset supports 176 of the 282 languages from the original WikiANN corpus.
### Supported Tasks and Leaderboards
The primary supported task for this model is named entity recognition (NER) in Arabic. However, it can also be used to explore the zero-shot cross-lingual capabilities of multilingual models, allowing for NER in various languages.
### Use Cases
+ **Arabic Named Entity Recognition**: *BERT-base_NER-ar* can be used to extract named entities (such as names of people, locations, and organizations) from Arabic text. This is valuable for information retrieval, text summarization, and content analysis in Arabic language applications.
+ **Multilingual NER**: The model's multilingual capabilities enable it to perform NER in other languages supported by the "wikiann" dataset, making it versatile for cross-lingual NER tasks.
### Limitations
+ **Language Limitation**: While the model supports multiple languages, it may not perform equally well in all of them. Performance could vary depending on the quality and quantity of training data available for specific languages.
+ **Fine-Tuning Data**: The model's performance is dependent on the quality and representativeness of the fine-tuning data (the "wikiann" dataset in this case). If the dataset is limited or biased, it may affect the model's performance.
## Usage :
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
# Load the fine-tuned model
model = AutoModelForTokenClassification.from_pretrained("ayoubkirouane/BERT-base_NER-ar")
tokenizer = AutoTokenizer.from_pretrained("ayoubkirouane/BERT-base_NER-ar")
# Tokenize your input text
text = "عاصمة فلسطين هي القدس الشريف."
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(text)))
# Convert tokens to input IDs
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# Perform NER inference
with torch.no_grad():
outputs = model(torch.tensor([input_ids]))
# Get the predicted labels for each token
predicted_labels = outputs[0].argmax(dim=2).cpu().numpy()[0]
# Map label IDs to human-readable labels
predicted_labels = [model.config.id2label[label_id] for label_id in predicted_labels]
# Print the tokenized text and its associated labels
for token, label in zip(tokens, predicted_labels):
print(f"Token: {token}, Label: {label}")
``` |
roa7n/gpt2-human_nontata_promoters-randomized_9_layers_0.0003_lr_2_e | roa7n | 2023-09-29T12:16:33Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T12:16:31Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
ldos/text_shortening_model_v64 | ldos | 2023-09-29T12:13:37Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-29T11:34:16Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: text_shortening_model_v64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3622
- Bert precision: 0.7381
- Bert recall: 0.7763
- Bert f1-score: 0.7541
- Average word count: 9.0345
- Max word count: 14
- Min word count: 2
- Average token count: 15.5862
- % shortened texts with length > 12: 20.6897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bert precision | Bert recall | Bert f1-score | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 3.1461 | 1.0 | 5 | 2.3622 | 0.7381 | 0.7763 | 0.7541 | 9.0345 | 14 | 2 | 15.5862 | 20.6897 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
duytintruong/ppo-LunarLander-v2 | duytintruong | 2023-09-29T12:11:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-29T12:10:41Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.53 +/- 22.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DamarJati/plastic-recycling-codes | DamarJati | 2023-09-29T11:59:46Z | 280 | 2 | transformers | [
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"en",
"dataset:imagefolder",
"dataset:aytvill/plastic-recycling-codes",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-09-29T06:39:18Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
- aytvill/plastic-recycling-codes
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.391304347826087
widget:
- src: >-
https://huggingface.co/DamarJati/plastic-recycling-codes/resolve/main/example/image1.jpg
example_title: image1.jpg
- src: >-
https://huggingface.co/DamarJati/plastic-recycling-codes/resolve/main/example/image2.jpg
example_title: image2.jpg
- src: >-
https://huggingface.co/DamarJati/plastic-recycling-codes/resolve/main/example/image3.jpg
example_title: image3.jpg
language:
- en
pipeline_tag: image-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-5
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 1.847501 | 0.260870 |
| 1.9354 | 2.0 | 10 | 1.729485 | 0.333333 |
| 1.9354 | 3.0 | 15 | 1.681863 | 0.391304 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3 |
soBeauty/V2_20230929-9-xlm-roberta-base-new | soBeauty | 2023-09-29T11:54:07Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-29T08:47:47Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-9-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-9-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.4563
- Loss: 2.9802
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.2931 | 1.38 | 200 | 0.3023 | 4.0097 |
| 3.8132 | 2.76 | 400 | 0.3169 | 3.9995 |
| 3.6834 | 4.14 | 600 | 0.4007 | 3.3898 |
| 3.4093 | 5.52 | 800 | 0.3776 | 3.2085 |
| 3.2579 | 6.9 | 1000 | 0.4191 | 3.3291 |
| 3.1115 | 8.28 | 1200 | 0.4153 | 3.3472 |
| 3.0367 | 9.66 | 1400 | 0.4351 | 3.0613 |
| 2.8776 | 11.03 | 1600 | 0.4015 | 3.4168 |
| 2.8575 | 12.41 | 1800 | 0.4545 | 2.9002 |
| 2.8635 | 13.79 | 2000 | 0.4563 | 2.9802 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
soBeauty/V2_20230929-8-xlm-roberta-base-new | soBeauty | 2023-09-29T11:38:13Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-29T08:35:34Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-8-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-8-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5333
- Loss: 2.6271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.3526 | 1.38 | 200 | 0.2971 | 3.8765 |
| 3.8293 | 2.76 | 400 | 0.3692 | 3.3059 |
| 3.5091 | 4.14 | 600 | 0.4261 | 3.1166 |
| 3.382 | 5.52 | 800 | 0.4662 | 2.8632 |
| 3.1966 | 6.9 | 1000 | 0.4622 | 2.8866 |
| 3.1158 | 8.28 | 1200 | 0.4588 | 2.8542 |
| 2.9343 | 9.66 | 1400 | 0.4568 | 2.7541 |
| 2.8719 | 11.03 | 1600 | 0.4286 | 2.7540 |
| 2.8378 | 12.41 | 1800 | 0.5074 | 2.6573 |
| 2.8196 | 13.79 | 2000 | 0.5333 | 2.6271 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tolpem/distilbert-base-uncased-finetuned-imdb | tolpem | 2023-09-29T11:22:48Z | 71 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-29T11:17:44Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: tolpem/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tolpem/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8561
- Validation Loss: 2.5781
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8561 | 2.5781 | 0 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-randomized_8_layers_3e-05_lr_8_e | roa7n | 2023-09-29T11:11:21Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T11:11:19Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
astha789/rare-puppers | astha789 | 2023-09-29T10:55:14Z | 195 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-09-29T10:55:06Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.89552241563797
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
mcparty2/xlm-roberta-base-finetuned-panx-de-fr | mcparty2 | 2023-09-29T10:54:02Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-09-29T10:41:36Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1623
- F1: 0.8603
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1813 | 0.8232 |
| 0.1482 | 2.0 | 1430 | 0.1586 | 0.8462 |
| 0.0959 | 3.0 | 2145 | 0.1623 | 0.8603 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
hardikcode/distilbert-base-uncased-finetuned-imdb | hardikcode | 2023-09-29T10:53:21Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-29T10:50:07Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7024 | 1.0 | 157 | 2.4968 |
| 2.5794 | 2.0 | 314 | 2.4281 |
| 2.5354 | 3.0 | 471 | 2.4509 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
soBeauty/V2_20230929-5-xlm-roberta-base-new | soBeauty | 2023-09-29T10:51:07Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-29T08:00:28Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-5-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-5-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5181
- Loss: 2.5292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.3451 | 1.38 | 200 | 0.3686 | 3.5221 |
| 3.8508 | 2.76 | 400 | 0.4402 | 3.2092 |
| 3.5934 | 4.14 | 600 | 0.3908 | 3.4233 |
| 3.1956 | 5.52 | 800 | 0.4317 | 3.3102 |
| 3.2828 | 6.9 | 1000 | 0.4704 | 2.9782 |
| 3.1068 | 8.28 | 1200 | 0.5019 | 2.6751 |
| 2.9976 | 9.66 | 1400 | 0.4493 | 3.0054 |
| 2.9072 | 11.03 | 1600 | 0.4189 | 3.0985 |
| 2.8663 | 12.41 | 1800 | 0.5385 | 2.4444 |
| 2.804 | 13.79 | 2000 | 0.5181 | 2.5292 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
polymonyrks/distilbert-base-uncased-finetuned-emotion | polymonyrks | 2023-09-29T10:46:39Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-30T14:56:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9255688957679862
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2237
- Accuracy: 0.9255
- F1: 0.9256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8556 | 1.0 | 250 | 0.3192 | 0.908 | 0.9055 |
| 0.2538 | 2.0 | 500 | 0.2237 | 0.9255 | 0.9256 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
soBeauty/V2_20230929-4-xlm-roberta-base-new | soBeauty | 2023-09-29T10:36:13Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-29T07:48:49Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-4-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-4-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.4980
- Loss: 2.6341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.4422 | 1.38 | 200 | 0.2888 | 4.2369 |
| 3.9018 | 2.76 | 400 | 0.3333 | 3.9767 |
| 3.5709 | 4.14 | 600 | 0.3669 | 3.5533 |
| 3.3829 | 5.52 | 800 | 0.3891 | 3.3396 |
| 3.2242 | 6.9 | 1000 | 0.4244 | 3.0648 |
| 3.0837 | 8.28 | 1200 | 0.4515 | 3.2200 |
| 2.9448 | 9.66 | 1400 | 0.4637 | 2.8563 |
| 2.8529 | 11.03 | 1600 | 0.4664 | 2.9343 |
| 2.8343 | 12.41 | 1800 | 0.4498 | 3.1041 |
| 2.813 | 13.79 | 2000 | 0.4980 | 2.6341 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
LeoLM/leo-hessianai-13b | LeoLM | 2023-09-29T10:34:48Z | 1,442 | 27 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"en",
"de",
"dataset:oscar-corpus/OSCAR-2301",
"dataset:wikipedia",
"dataset:bjoernp/tagesschau-2018-2023",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-05T22:47:48Z | ---
datasets:
- oscar-corpus/OSCAR-2301
- wikipedia
- bjoernp/tagesschau-2018-2023
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
---
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## Model Details
- **Finetuned from:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:[email protected])
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("LeoLM/leo-hessianai-13b")
model = AutoModelForCausalLM.from_pretrained(
    "LeoLM/leo-hessianai-13b",  # pass the model id positionally; `model=` is not a valid keyword argument
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True,  # True for flash-attn2, else False
)
```
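A hedged generation sketch using the tokenizer and model loaded above; the prompt and decoding settings are illustrative assumptions, not recommended values.
```python
prompt = "Heute ist ein schöner Tag, und"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Sample a short continuation; generation settings here are arbitrary examples
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```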
## Training parameters

## Benchmarks
 |
pembelajarff/moviereview-ds-mini | pembelajarff | 2023-09-29T10:31:58Z | 61 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-29T10:31:31Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: moviereview-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# moviereview-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.1821
- Validation Loss: 7.8696
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -887, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2500 | 9.5646 | 0 |
| 9.1560 | 8.7719 | 1 |
| 8.1821 | 7.8696 | 2 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
pembelajarff/movie_review | pembelajarff | 2023-09-29T10:30:02Z | 125 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-19T04:24:33Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: pembelajarff/movie_review
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pembelajarff/movie_review
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.1821
- Validation Loss: 7.8696
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -887, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2500 | 9.5646 | 0 |
| 9.1560 | 8.7719 | 1 |
| 8.1821 | 7.8696 | 2 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Thenujan/ViT-H-14 | Thenujan | 2023-09-29T10:28:16Z | 2 | 0 | open_clip | [
"open_clip",
"feature-extraction",
"en",
"license:other",
"region:us"
] | feature-extraction | 2023-08-29T12:51:04Z | ---
license: other
language:
- en
metrics:
- mape
library_name: open_clip
pipeline_tag: feature-extraction
--- |
pavithrav/distilbert-base-uncased-finetuned-emotion | pavithrav | 2023-09-29T10:26:51Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-29T10:26:11Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2215
- Accuracy: 0.9235
- F1: 0.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8569 | 1.0 | 250 | 0.3312 | 0.901 | 0.8994 |
| 0.2561 | 2.0 | 500 | 0.2215 | 0.9235 | 0.9236 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Weyaxi/ChatAYT-Lora-Assamble-Marcoroni-v2 | Weyaxi | 2023-09-29T10:22:18Z | 20 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-14T07:43:32Z | <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> |
soBeauty/V2_20230929-3-xlm-roberta-base-new | soBeauty | 2023-09-29T10:21:45Z | 157 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-29T07:37:05Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-3-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-3-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5378
- Loss: 2.2727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.3145 | 1.38 | 200 | 0.2955 | 3.8793 |
| 3.8469 | 2.76 | 400 | 0.3398 | 3.7082 |
| 3.4996 | 4.14 | 600 | 0.4110 | 3.1106 |
| 3.4055 | 5.52 | 800 | 0.3919 | 3.1465 |
| 3.1658 | 6.9 | 1000 | 0.4786 | 2.9087 |
| 3.1597 | 8.28 | 1200 | 0.4128 | 3.0067 |
| 2.9918 | 9.66 | 1400 | 0.4664 | 2.7497 |
| 2.8913 | 11.03 | 1600 | 0.4580 | 2.6409 |
| 2.8172 | 12.41 | 1800 | 0.4449 | 2.9132 |
| 2.9125 | 13.79 | 2000 | 0.5378 | 2.2727 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
SakataHalmi/Reinforce-Pixelcopter-PLE-v0 | SakataHalmi | 2023-09-29T10:09:25Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-28T20:27:16Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 68.80 +/- 55.98
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
RogerB/afriberta_base-kinyarwanda-finetuned | RogerB | 2023-09-29T10:03:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:castorini/afriberta_base",
"base_model:finetune:castorini/afriberta_base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-29T09:21:17Z | ---
base_model: castorini/afriberta_base
tags:
- generated_from_trainer
model-index:
- name: afriberta_base-kinyarwanda-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta_base-kinyarwanda-finetuned
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.1683 | 1.0 | 5000 | 2.7855 |
| 2.8371 | 2.0 | 10000 | 2.6643 |
| 2.7277 | 3.0 | 15000 | 2.5899 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
language-ml-lab/classification-azb | language-ml-lab | 2023-09-29T09:39:44Z | 183 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"az",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-26T15:12:37Z | ---
language:
- az
metrics:
- accuracy
- f1
widget:
- text: کریم خان زندین اؤلومو ایله خانلیق یئنیدن موستقیل سیاست یئریتمگه باشلادی .
example_title: تاریخ
- text: کیمیا علیزاده زنوزی اصیللی ایرانلی تکواندو اویونچوسودور .
example_title: ایدمان
- text: خزر دنیزی بؤیوکلوگونه و بعضی فیزیکی جوغرافی علامتلرینه گؤره دونیانین ان بؤیوک گؤلودور .
example_title: جوغرافیا
- text: گولخانی اؤزبک کلاسیک شاعیری ، ادیبی ، یازیچی و اؤزبک ادبیاتینین ساتیریک مکتبینین قوروجولاریندان بیریدیر .
example_title: ادبیات
---
# Text Classification Model
- Type: Fine-tuned BERT-based text classification model
- Description: This model has been fine-tuned using [AzerBERT](https://huggingface.co/language-ml-lab/AzerBert) for text classification tasks. It is designed to categorize text into one of the following four categories: literature, sports, history, and geography.
## How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-classification", model="language-ml-lab/classification-azb")
```
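A minimal call sketch using the pipeline created above; the input is one of the widget examples from this card, and the exact label strings returned depend on the model's configuration.
```python
# The pipeline returns a list of {"label": ..., "score": ...} dicts
result = pipe("کیمیا علیزاده زنوزی اصیللی ایرانلی تکواندو اویونچوسودور .")
print(result)
```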
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("language-ml-lab/classification-azb")
model = AutoModelForSequenceClassification.from_pretrained("language-ml-lab/classification-azb")
``` |
jiantongxu/mit-b0-scene-parse-150-lora | jiantongxu | 2023-09-29T09:36:42Z | 28 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T09:09:56Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
mangeshdiyewar/WizardMaths-fined_tuned | mangeshdiyewar | 2023-09-29T09:29:07Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-29T09:29:05Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
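As a hedged illustration only, the values listed above map onto a `transformers` `BitsAndBytesConfig` roughly as follows; this is a reconstruction, not the exact object used during training.
```python
import torch
from transformers import BitsAndBytesConfig

# Values copied from the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```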
### Framework versions
- PEFT 0.6.0.dev0
|
anonymousTheStackRepo/trained_checkpoints | anonymousTheStackRepo | 2023-09-29T09:28:10Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2023-05-22T19:56:57Z | ---
license: other
---
These versions of the model weights are permitted for use strictly in conjunction with the review process for the paper. Upon completion of the review, a de-anonymized version of the weights will be released under an appropriate license.
|
developers-1/a-comprehensive-guide-to-attending-seattle-pride-fest | developers-1 | 2023-09-29T09:10:28Z | 0 | 0 | null | [
"region:us"
] | null | 2023-09-29T09:09:04Z | <p style="text-align: start;color: rgb(17, 19, 31);background-color: rgb(255, 255, 255);font-size: 20px;">Eager to join the vibrant celebration at Seattle PrideFest? Be prepared to march through the streets enveloped in hues of love and freedom.</p>
<div style="text-align: start;color: rgb(33, 37, 41);background-color: rgb(255, 255, 255);font-size: 16px;">
<p style="color: rgb(73, 78, 112);font-size: 20px;"><br></p>
<p style="color: rgb(73, 78, 112);font-size: 20px;">To aid your adventure, this extensive guide offers a historical walkthrough, a fashion handbook, and tips for what to pack, ensuring your <u><a href="https://www.seattlepridefest.org/" target="_blank" rel="nofollow" style="color: rgb(55, 125, 255);">Seattle PrideFest</a></u> experience is colorful, stylish, and memorable!</p>
<p style="color: rgb(73, 78, 112);font-size: 20px;">This guide delivers everything from a dive into the history of Seattle Pride Fest to sartorial tips ensuring you stand out while basking in the festivities.</p>
<h2 style="font-size: 2rem;">The Rainbow Path: History of Seattle PrideFest</h2>
<p style="color: rgb(73, 78, 112);font-size: 20px;">Seattle PrideFest, a celebration of love, equality, and the LGBTQ+ community, is the contemporary successor to a long tradition of Pride events in Seattle, dating back to the 1970s.</p>
<h3 style="font-size: 1.75rem;">Key Historical Highlights:</h3>
<ul style="list-style-type: none;">
<li>1974: Seattle's first Pride Week, a commemoration of the Stonewall Riots, lays the foundation.</li>
<li>2006: Seattle PrideFest, as we know it today, is born, taking over from Pride Week and growing in inclusivity and celebration.</li>
<li>Today: Seattle PrideFest stands as the largest free Pride Festival in the United States.</li>
</ul>
<h2 style="font-size: 2rem;">What to Wear to Seattle PrideFest?</h2>
<h3 style="font-size: 1.75rem;">1. Bathing in Colors:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Opt for clothing in <a href="https://www.dollskill.com/collections/rainbow-clothing" target="_blank" rel="nofollow" style="color: rgb(13, 110, 253);">vibrant rainbow colors</a>. A rainbow-striped dress, a multi-colored jumpsuit, or a shirt paired with a vibrant tutu can make you shine.</li>
</ul>
<h3 style="font-size: 1.75rem;">2. Comfort First:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Pick breathable, light fabrics. Consider a lightweight dress, comfortable shorts, or a relaxed tee to keep cool and comfy.</li>
</ul>
<h3 style="font-size: 1.75rem;">3. Footwear:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Choose comfortable and stylish footwear. Think colorful sneakers, fashionable sandals, or cute, flat boots to dance and walk in comfort.</li>
</ul>
<h2 style="font-size: 2rem;">Amp Your Style: Accessories & Make-up</h2>
<h3 style="font-size: 1.75rem;">1. Bold Accessories:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Choose oversized earrings, funky sunglasses, or colorful, chunky bracelets to make a statement.</li>
</ul>
<h3 style="font-size: 1.75rem;">2. Beauty and Makeup:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Think bright, glittery, and rainbow-themed makeup. Let your face mirror the festival's jubilance.</li>
</ul>
<h2 style="font-size: 2rem;">What to Bring to Seattle PrideFest?</h2>
<h3 style="font-size: 1.75rem;">1. Hydration:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Carry a refillable water bottle to stay hydrated amid the celebrations.</li>
</ul>
<h3 style="font-size: 1.75rem;">2. Sun Protection:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Pack sunscreen, a fashionable hat, and sunglasses to stay protected from the sun.</li>
</ul>
<h3 style="font-size: 1.75rem;">3. Charging Essentials:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Bring a portable charger to ensure your gadgets stay powered for capturing memories.</li>
</ul>
<h2 style="font-size: 2rem;">Maximize Your Seattle PrideFest Experience</h2>
<h3 style="font-size: 1.75rem;">1. Plan Ahead:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Research the event schedule and routes to plan your day efficiently.</li>
</ul>
<h3 style="font-size: 1.75rem;">2. Engagement:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Engage with the community, participate in activities, and enjoy performances.</li>
</ul>
<h3 style="font-size: 1.75rem;">3. Respect & Etiquette:</h3>
<ul style="list-style-type: none;">
<li>Suggestion: Maintain respect and courtesy for everyone’s unique expressions and identities.</li>
</ul>
<h2 style="font-size: 2rem;">Conclusion</h2>
<p style="color: rgb(73, 78, 112);font-size: 20px;">As you prepare for a spectacular celebration at <strong>Seattle PrideFest</strong>, let this guide be your companion, ensuring a seamless blend of style, comfort, and understanding of the event's historical backdrop. As you march in unity, swathed in colors of love and freedom, remember the roots of the <u><a href="https://www.dollskill.com/collections/pride-outfits" target="_blank" rel="nofollow" style="color: rgb(55, 125, 255);">Pride festival</a></u>, anchored in the fight for equality and love.</p>
<p style="color: rgb(73, 78, 112);font-size: 20px;">Embrace the kaleidoscope of colors, love, and unity, ensuring you not only stand out in your fabulous outfits, but also carry the spirit and significance of Pride within you. With each laughter, dance, and cheer, resonate the essence of love, equality, and freedom that <strong>Seattle PrideFest</strong> so beautifully embodies.</p>
<p style="color: rgb(73, 78, 112);font-size: 20px;">Enjoy every second, while also honoring the significance of the event. Get ready to unleash your colors, resonate love, and create beautiful memories at Seattle PrideFest!</p>
</div> |
GreenBitAI/LLaMA-3B-2bit-groupsize32 | GreenBitAI | 2023-09-29T09:10:25Z | 96 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-15T19:51:05Z | ---
license: apache-2.0
---
# GreenBit LLaMA
This is GreenBitAI's pretrained **2-bit** LLaMA model, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/low_bit_llama) for the code to run the model and more information.
## Zero-Shot Evaluation
| Task | Metric | LLaMA 3B q2g32 | LLaMA 3B q2g16 | LLaMA 3B q2g8 | LLaMA-1 7B q2g32 | LLaMA-2 7B q2g32 | LLaMA-2 7B q2g8 | LLaMA 3B FP16 | LLaMA-1 7B FP16 |
|---------------|----------|----------------|----------------|--------------|------------------|------------------|----------------|--------------|-----------------|
| Openbookqa | acc | 0.196 | 0.238 | 0.242 | 0.224 | 0.246 | 0.296 | 0.27 | 0.29 |
| | ac_norm | 0.332 | 0.358 | 0.362 | 0.388 | 0.376 | 0.4 | 0.4 | 0.41 |
| arc_challenge | acc | 0.279 | 0.2978 | 0.3148 | 0.3422 | 0.3268 | 0.3618 | 0.34 | 0.39 |
| | ac_norm | 0.2944 | 0.3319 | 0.3345 | 0.3387 | 0.3387 | 0.372 | 0.37 | 0.41 |
| hellawswag | acc | 0.4238 | 0.444 | 0.462 | 0.4996 | 0.4961 | 0.5379 | 0.49 | 0.68 |
| | ac_norm | 0.5685 | 0.5988 | 0.6242 | 0.6447 | 0.6464 | 0.7014 | 0.67 | 0.73 |
| piqa | acc | 0.7024 | 0.716 | 0.7291 | 0.7476 | 0.7503 | 0.7715 | 0.75 | 0.78 |
| | ac_norm | 0.7116 | 0.7247 | 0.7312 | 0.7443 | 0.7421 | 0.7568 | 0.76 | 0.78 |
| arc_easy | acc | 0.5997 | 0.646 | 0.6528 | 0.6061 | 0.6174 | 0.6254 | 0.69 | 0.68 |
| | ac_norm | 0.5417 | 0.58 | 0.5972 | 0.4566 | 0.4781 | 0.4958 | 0.65 | 0.52 |
| Winogrande | acc | 0.5683 | 0.5888 | 0.6054 | 0.6283 | 0.6298 | 0.6582 | 0.62 | 0.68 |
| boolq | acc | 0.6281 | 0.6636 | 0.6327 | 0.6425 | 0.7061 | 0.7242 | 0.68 | 0.75 |
| truthfulqa_mc | mc1 | 0.2509 | 0.2118 | 0.2252 | 0.224 | 0.2313 | 0.2399 | 0.22 | 0.21 |
| | mc2 | 0.3962 | 0.3501 | 0.3625 | 0.3702 | 0.3854 | 0.3795 | 0.35 | 0.34 |
| anli_r1 | acc | 0.337 | 0.334 | 0.344 | 0.331 | 0.333 | 0.363 | 0.33 | 0.35 |
| anli_r2 | acc | 0.335 | 0.332 | 0.331 | 0.326 | 0.349 | 0.347 | 0.32 | 0.34 |
| anli_r3 | acc | 0.3358 | 0.3383 | 0.3425 | 0.3417 | 0.36 | 0.3733 | 0.35 | 0.37 |
| wic | acc | 0.4984 | 0.5094 | 0.4969 | 0.4984 | 0.4953 | 0.489 | 0.48 | 0.5 |
| rte | acc | 0.5596 | 0.5993 | 0.5632 | 0.639 | 0.6065 | 0.6426 | 0.58 | 0.56 |
| record | f1 | 0.8502 | 0.8625 | 0.8687 | 0.8859 | 0.8872 | 0.9037 | 0.88 | 0.91 |
| | em | 0.8427 | 0.8545 | 0.8612 | 0.8781 | 0.8801 | 0.8959 | 0.89 | 0.91 |
| Average | | 0.4881 | 0.5037 | 0.5087 | 0.5122 | 0.5181 | 0.5391 | 0.528 | 0.5519 |

|
manishai/distilbert-base-uncased-finetuned-emotion | manishai | 2023-09-29T09:02:33Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-29T08:56:04Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.9195631718213454
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2272
- Accuracy: 0.92
- F1: 0.9196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8468 | 1.0 | 250 | 0.3426 | 0.897 | 0.8929 |
| 0.2636 | 2.0 | 500 | 0.2272 | 0.92 | 0.9196 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
vineetsharma/xsum-t5-small | vineetsharma | 2023-09-29T09:01:29Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-29T07:40:24Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: xsum-t5-small
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.3309
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xsum-t5-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4789
- Rouge1: 28.3309
- Rouge2: 7.7568
- Rougel: 22.2948
- Rougelsum: 22.2942
- Gen Len: 18.824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.9158 | 0.16 | 2000 | 2.5725 | 26.6629 | 6.6436 | 20.8032 | 20.7995 | 18.7886 |
| 2.7868 | 0.31 | 4000 | 2.5286 | 27.3979 | 7.1077 | 21.4451 | 21.4487 | 18.8045 |
| 2.756 | 0.47 | 6000 | 2.5058 | 27.8049 | 7.4383 | 21.8465 | 21.8479 | 18.8179 |
| 2.7388 | 0.63 | 8000 | 2.4903 | 28.1541 | 7.6412 | 22.1566 | 22.1572 | 18.8265 |
| 2.7208 | 0.78 | 10000 | 2.4819 | 28.2559 | 7.6877 | 22.2086 | 22.2118 | 18.8268 |
| 2.7175 | 0.94 | 12000 | 2.4789 | 28.3309 | 7.7568 | 22.2948 | 22.2942 | 18.824 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
language-ml-lab/fasttext-azb | language-ml-lab | 2023-09-29T09:00:06Z | 65 | 0 | fasttext | [
"fasttext",
"feature-extraction",
"az",
"region:us"
] | feature-extraction | 2023-09-20T10:06:22Z | ---
pipeline_tag: feature-extraction
library_name: fasttext
widget:
- text: آلما
example_title: آلما
- text: بایرام
example_title: بایرام
- text: قارداش
example_title: قارداش
language:
- az
---
# Language Model-based Embedding (FastText)
- Type: FastText-based word embedding model
- Description: This model provides embeddings for Iranian Azerbaijani text using the FastText framework. It allows you to generate word embeddings for Iranian Azerbaijani words and phrases.
## How to use
Please ensure that you have FastText installed on your system.
```python
from huggingface_hub import hf_hub_download
import fasttext
model = fasttext.load_model(hf_hub_download("language-ml-lab/fasttext-azb", "model.bin"))
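# Hypothetical usage sketch: fetch the embedding of a single word taken from this card's widget examples
# (get_word_vector and get_dimension are part of the fasttext Python API)
vector = model.get_word_vector("آلما")
print(vector.shape)  # equals (model.get_dimension(),)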
``` |