| modelId (string, 5–137 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 – 2025-03-26 12:27:25) | downloads (int64, 0 – 223M) | likes (int64, 0 – 10.1k) | library_name (397 classes) | tags (sequence, 1 – 4.05k items) | pipeline_tag (54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-03-26 12:27:02) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
DunnBC22/vit-base-patch16-224-in21k_GI_diagnosis | DunnBC22 | "2023-05-13T00:13:17Z" | 45 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-01-06T07:28:25Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: vit-base-patch16-224-in21k_GI_diagnosis
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
language:
- en
pipeline_tag: image-classification
---
# vit-base-patch16-224-in21k_GI_diagnosis
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
It achieves the following results on the evaluation set:
- Loss: 0.2538
- Accuracy: 0.9375
- Weighted f1: 0.9365
- Micro f1: 0.9375
- Macro f1: 0.9365
- Weighted recall: 0.9375
- Micro recall: 0.9375
- Macro recall: 0.9375
- Weighted precision: 0.9455
- Micro precision: 0.9375
- Macro precision: 0.9455
## Model description
This is a multiclass image classification model for gastrointestinal (GI) diagnoses.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Diagnoses%20from%20Colonoscopy%20Images/diagnosis_from_colonoscopy_image_ViT.ipynb
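As a rough usage sketch (not part of the original notebook; the image path below is a placeholder), the model can be loaded with the 🤗 `pipeline` API:
```python
from transformers import pipeline

# Minimal inference sketch; "colonoscopy.jpg" is a placeholder image path.
classifier = pipeline(
    "image-classification",
    model="DunnBC22/vit-base-patch16-224-in21k_GI_diagnosis",
)
print(classifier("colonoscopy.jpg"))
```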
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/francismon/curated-colon-dataset-for-deep-learning
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 1.3805 | 1.0 | 200 | 0.5006 | 0.8638 | 0.8531 | 0.8638 | 0.8531 | 0.8638 | 0.8638 | 0.8638 | 0.9111 | 0.8638 | 0.9111 |
| 1.3805 | 2.0 | 400 | 0.2538 | 0.9375 | 0.9365 | 0.9375 | 0.9365 | 0.9375 | 0.9375 | 0.9375 | 0.9455 | 0.9375 | 0.9455 |
| 0.0628 | 3.0 | 600 | 0.5797 | 0.8812 | 0.8740 | 0.8812 | 0.8740 | 0.8812 | 0.8812 | 0.8813 | 0.9157 | 0.8812 | 0.9157 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.12.1 |
mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF | mradermacher | "2024-10-31T16:37:08Z" | 15 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:Samsoup/Llama-3.2-3B-Instruct-HateXplain",
"base_model:quantized:Samsoup/Llama-3.2-3B-Instruct-HateXplain",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-10-31T16:23:44Z" | ---
base_model: Samsoup/Llama-3.2-3B-Instruct-HateXplain
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Samsoup/Llama-3.2-3B-Instruct-HateXplain
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
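As a minimal sketch (any file from the table below can be substituted), llama.cpp can download and run a quant directly from this repo:
```bash
# Sketch: fetch and run the Q4_K_M quant straight from the Hub with llama.cpp
llama-cli --hf-repo mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF \
  --hf-file Llama-3.2-3B-Instruct-HateXplain.i1-Q4_K_M.gguf \
  -p "Write one sentence about quantization."
```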
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-HateXplain-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-HateXplain.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
FullnameNameUser/DeepSeek-R1-Distill-Llama-8B-Q4_K_M-GGUF | FullnameNameUser | "2025-02-22T16:04:22Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-22T16:03:25Z" | ---
license: mit
library_name: transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
tags:
- llama-cpp
- gguf-my-repo
---
# FullnameNameUser/DeepSeek-R1-Distill-Llama-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Llama-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo FullnameNameUser/DeepSeek-R1-Distill-Llama-8B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-llama-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo FullnameNameUser/DeepSeek-R1-Distill-Llama-8B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-llama-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo FullnameNameUser/DeepSeek-R1-Distill-Llama-8B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-llama-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo FullnameNameUser/DeepSeek-R1-Distill-Llama-8B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-llama-8b-q4_k_m.gguf -c 2048
```
|
skarsa/babe_source_subsamples_model_alpha_0_01_idx_1 | skarsa | "2025-02-11T11:37:17Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T15:17:58Z" | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: babe_source_subsamples_model_alpha_0_01_idx_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babe_source_subsamples_model_alpha_0_01_idx_1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
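In the absence of documented usage, a minimal inference sketch (the label names and their meanings are not documented in this card) might look like:
```python
from transformers import pipeline

# Sketch only: label semantics are undocumented for this checkpoint.
clf = pipeline(
    "text-classification",
    model="skarsa/babe_source_subsamples_model_alpha_0_01_idx_1",
)
print(clf("An example news sentence."))
```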
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
BaiXuecheng/sd-class-butterflies-32 | BaiXuecheng | "2023-10-12T14:31:21Z" | 9 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2023-10-12T14:31:15Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline

# Load the pretrained pipeline and sample one butterfly image
pipeline = DDPMPipeline.from_pretrained('BaiXuecheng/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
fifxus/75c1dafe-fb0e-46ef-9fcf-eabacdcff052 | fifxus | "2025-02-07T00:21:23Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-06T22:41:54Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75c1dafe-fb0e-46ef-9fcf-eabacdcff052
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 93640c57fbb81292_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/93640c57fbb81292_train_data.json
type:
field_input: characters
field_instruction: situation
field_output: rot
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: fifxus/75c1dafe-fb0e-46ef-9fcf-eabacdcff052
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 500
micro_batch_size: 2
mlflow_experiment_name: /tmp/93640c57fbb81292_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: dafcc918-b64a-4c3f-959e-8b846e332c76
wandb_project: Gradients-On-10
wandb_run: your_name
wandb_runid: dafcc918-b64a-4c3f-959e-8b846e332c76
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 75c1dafe-fb0e-46ef-9fcf-eabacdcff052
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4801
## Model description
More information needed
## Intended uses & limitations
More information needed
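As a minimal loading sketch (this repo contains a LoRA adapter for `unsloth/Qwen2.5-Coder-7B`, per the config above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load the base model, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-Coder-7B")
model = PeftModel.from_pretrained(base, "fifxus/75c1dafe-fb0e-46ef-9fcf-eabacdcff052")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-Coder-7B")
```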
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5444 | 0.0118 | 500 | 1.4801 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Hlias11/smalltalkpruned | Hlias11 | "2025-01-27T11:45:23Z" | 19 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-27T11:44:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
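Pending author documentation, a generic sketch (assuming standard 🤗 Transformers causal-LM usage, inferred only from the repo tags) might look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard text-generation usage inferred from the repo's tags.
tok = AutoTokenizer.from_pretrained("Hlias11/smalltalkpruned")
model = AutoModelForCausalLM.from_pretrained("Hlias11/smalltalkpruned")
inputs = tok("Hello, how are you?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```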
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jlbaker361/small_fine-tune_addition_decimal_whole | jlbaker361 | "2023-11-17T05:56:39Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-11-17T05:56:38Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
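Pending author documentation, a hedged sketch (assuming standard PEFT adapter loading on the `gpt2` base noted in the metadata; the prompt format is only a guess from the repo name):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumptions: AutoPeft loading works for this adapter (base: gpt2),
# and the arithmetic-style prompt is only a guess from the repo name.
model = AutoPeftModelForCausalLM.from_pretrained("jlbaker361/small_fine-tune_addition_decimal_whole")
tok = AutoTokenizer.from_pretrained("gpt2")
inputs = tok("12.5 + 3.4 =", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=8)[0]))
```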
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
checkiejan/prefix-paraphase-50-19-auto | checkiejan | "2023-09-19T12:55:49Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-19T12:55:46Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
kk-aivio/2dc4d32d-4916-4765-b805-a3f1f0023763 | kk-aivio | "2025-01-23T10:32:48Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | "2025-01-23T10:31:33Z" | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2dc4d32d-4916-4765-b805-a3f1f0023763
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bc57cf348c51af33_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bc57cf348c51af33_train_data.json
type:
field_instruction: text
field_output: keywords
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/2dc4d32d-4916-4765-b805-a3f1f0023763
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/bc57cf348c51af33_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 70982028-58ca-4bdf-b00c-c386a65435ef
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 70982028-58ca-4bdf-b00c-c386a65435ef
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2dc4d32d-4916-4765-b805-a3f1f0023763
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0011 | 1 | nan |
| 0.0 | 0.0033 | 3 | nan |
| 0.0 | 0.0067 | 6 | nan |
| 0.0 | 0.0100 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
huggingtweets/imcummingonline | huggingtweets | "2021-05-22T07:59:27Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/imcummingonline/1617770513198/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1354295229654958081/FUhOGuYV_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ace 🤖 AI Bot </div>
<div style="font-size: 15px">@imcummingonline bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@imcummingonline's tweets](https://twitter.com/imcummingonline).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 914 |
| Retweets | 88 |
| Short tweets | 218 |
| Tweets kept | 608 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2yh36yxx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imcummingonline's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nnnr0u8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nnnr0u8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/imcummingonline')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pkupie/Llama-2-7b-FLAN-step884 | pkupie | "2024-12-16T05:55:13Z" | 9 | 0 | null | [
"safetensors",
"llama",
"en",
"bo",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | "2024-12-07T06:00:51Z" | ---
license: llama2
language:
- en
- bo
base_model:
- meta-llama/Llama-2-7b-hf
---
This is a supervised fine-tuned model based on Llama-2-7b-hf.
We used the FLAN datasets for training.
#### Hyper-parameters:
* lr: 3e-5
* batch size: 0.25M (2K*128)
* lr scheduler: cosine
* min lr: 1e-5
* lr decay iters: 2048
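As an illustrative sketch, the schedule above (cosine decay from 3e-5 to a 1e-5 floor over 2048 decay iterations) corresponds to something like the following; the optimizer and parameters here are stand-ins, since the card does not specify a training framework:
```python
import torch

# Stand-in parameters/optimizer; only the LR schedule reflects the card.
params = [torch.nn.Parameter(torch.zeros(1))]
opt = torch.optim.AdamW(params, lr=3e-5)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=2048, eta_min=1e-5)
for _ in range(2048):
    opt.step()
    sched.step()
```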
## Citation
If you find this model useful in your work, please cite it with:
```
@inproceedings{tao-etal-2024-unlocking,
title = "Unlocking the Potential of Model Merging for Low-Resource Languages",
author = "Tao, Mingxu and
Zhang, Chen and
Huang, Quzhe and
Ma, Tianyao and
Huang, Songfang and
Zhao, Dongyan and
Feng, Yansong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-emnlp.508",
doi = "10.18653/v1/2024.findings-emnlp.508",
pages = "8705--8720"
}
``` |
FriendliAI/internlm3-8b-instruct | FriendliAI | "2025-03-06T02:30:35Z" | 0 | 0 | null | [
"safetensors",
"internlm3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2403.17297",
"license:apache-2.0",
"region:us"
] | text-generation | "2025-03-06T02:30:34Z" | ---
license: apache-2.0
pipeline_tag: text-generation
---
# InternLM
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
[](https://github.com/internLM/OpenCompass/)
[💻Github Repo](https://github.com/InternLM/InternLM) • [🤗Demo](https://huggingface.co/spaces/internlm/internlm3-8b-instruct) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) • [📜Technical Report](https://arxiv.org/abs/2403.17297)
</div>
<p align="center">
👋 join us on <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://github.com/InternLM/InternLM/assets/25839884/a6aad896-7232-4220-ac84-9e070c2633ce" target="_blank">WeChat</a>
</p>
## Introduction
InternLM3 has open-sourced an 8-billion-parameter instruction model, InternLM3-8B-Instruct, designed for general-purpose use and advanced reasoning. The model has the following characteristics:
- **Enhanced performance at reduced cost**:
State-of-the-art performance on reasoning and knowledge-intensive tasks, surpassing models like Llama3.1-8B and Qwen2.5-7B. Remarkably, InternLM3 was trained on only 4 trillion high-quality tokens, saving more than 75% of the training cost compared to other LLMs of a similar scale.
- **Deep thinking capability**:
InternLM3 supports both a deep thinking mode, which solves complicated reasoning tasks via long chains of thought, and a normal response mode for fluent user interactions.
## InternLM3-8B-Instruct
### Performance Evaluation
We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results, and you can visit the [OpenCompass leaderboard](https://rank.opencompass.org.cn) for more evaluation results.
| | Benchmark | InternLM3-8B-Instruct | Qwen2.5-7B-Instruct | Llama3.1-8B-Instruct | GPT-4o-mini (closed source) |
| ------------ | ------------------------------- | --------------------- | ------------------- | -------------------- | -------------------------- |
| General | CMMLU(0-shot) | **83.1** | 75.8 | 53.9 | 66.0 |
| | MMLU(0-shot) | 76.6 | **76.8** | 71.8 | 82.7 |
| | MMLU-Pro(0-shot) | **57.6** | 56.2 | 48.1 | 64.1 |
| Reasoning | GPQA-Diamond(0-shot) | **37.4** | 33.3 | 24.2 | 42.9 |
| | DROP(0-shot) | **83.1** | 80.4 | 81.6 | 85.2 |
| | HellaSwag(10-shot) | **91.2** | 85.3 | 76.7 | 89.5 |
| | KOR-Bench(0-shot) | **56.4** | 44.6 | 47.7 | 58.2 |
| MATH | MATH-500(0-shot) | **83.0*** | 72.4 | 48.4 | 74.0 |
| | AIME2024(0-shot) | **20.0*** | 16.7 | 6.7 | 13.3 |
| Coding | LiveCodeBench(2407-2409 Pass@1) | **17.8** | 16.8 | 12.9 | 21.8 |
| | HumanEval(Pass@1) | 82.3 | **85.4** | 72.0 | 86.6 |
| Instruction | IFEval(Prompt-Strict) | **79.3** | 71.7 | 75.2 | 79.7 |
| Long Context | RULER(4-128K Average) | 87.9 | 81.4 | **88.5** | 90.7 |
| Chat | AlpacaEval 2.0(LC WinRate) | **51.1** | 30.3 | 25.0 | 50.7 |
| | WildBench(Raw Score) | **33.1** | 23.3 | 1.5 | 40.3 |
| | MT-Bench-101(Score 1-10) | **8.59** | 8.49 | 8.37 | 8.87 |
- Values marked in bold indicate the **highest** among open-source models.
- The evaluation results were obtained from [OpenCompass](https://github.com/internLM/OpenCompass/) (entries marked with * were evaluated in Thinking Mode); the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/).
- Evaluation numbers may differ across [OpenCompass](https://github.com/internLM/OpenCompass/) version iterations, so please refer to the latest evaluation results from [OpenCompass](https://github.com/internLM/OpenCompass/).
**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
### Requirements
```python
transformers >= 4.48
```
### Conversation Mode
#### Transformers inference
To load the InternLM3 8B Instruct model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_dir = "internlm/internlm3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
# (Optional) If on low resource devices, you can load model in 4-bit or 8-bit to further save GPU memory via bitsandbytes.
# InternLM3 8B in 4bit will cost nearly 8GB GPU memory.
# pip install -U bitsandbytes
# 8-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "Please tell me five scenic spots in Shanghai"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
generated_ids = model.generate(tokenized_chat, max_new_tokens=1024, temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(tokenized_chat, generated_ids)
]
prompt = tokenizer.batch_decode(tokenized_chat)[0]
print(prompt)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
#### LMDeploy inference
LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following Python code:
```python
import lmdeploy
model_dir = "internlm/internlm3-8b-instruct"
pipe = lmdeploy.pipeline(model_dir)
response = pipe("Please tell me five scenic spots in Shanghai")
print(response)
```
Or you can launch an OpenAI-compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm3-8b-instruct --model-name internlm3-8b-instruct --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm3-8b-instruct",
"messages": [
{"role": "user", "content": "Please tell me five scenic spots in Shanghai"}
]
}'
```
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/).
#### Ollama inference
First install Ollama:
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch model
ollama pull internlm/internlm3-8b-instruct
# install
pip install ollama
```
Inference code:
```python
import ollama
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
messages = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": "Please tell me five scenic spots in Shanghai"
},
]
stream = ollama.chat(
model='internlm/internlm3-8b-instruct',
messages=messages,
stream=True,
)
for chunk in stream:
print(chunk['message']['content'], end='', flush=True)
```
#### vLLM inference
Refer to the [installation guide](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to install the latest version of vLLM:
```bash
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
Inference code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="internlm/internlm3-8b-instruct")
sampling_params = SamplingParams(temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8)
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
prompts = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": "Please tell me five scenic spots in Shanghai"
},
]
outputs = llm.chat(prompts,
sampling_params=sampling_params,
use_tqdm=False)
print(outputs)
```
### Thinking Mode
#### Thinking Demo
<img src="https://github.com/InternLM/InternLM/blob/017ba7446d20ecc3b9ab8e7b66cc034500868ab4/assets/solve_puzzle.png?raw=true" width="400"/>
#### Thinking system prompt
```python
thinking_system_prompt = """You are an expert mathematician with extensive experience in mathematical competitions. You approach problems through systematic thinking and rigorous reasoning. When solving problems, follow these thought processes:
## Deep Understanding
Take time to fully comprehend the problem before attempting a solution. Consider:
- What is the real question being asked?
- What are the given conditions and what do they tell us?
- Are there any special restrictions or assumptions?
- Which information is crucial and which is supplementary?
## Multi-angle Analysis
Before solving, conduct thorough analysis:
- What mathematical concepts and properties are involved?
- Can you recall similar classic problems or solution methods?
- Would diagrams or tables help visualize the problem?
- Are there special cases that need separate consideration?
## Systematic Thinking
Plan your solution path:
- Propose multiple possible approaches
- Analyze the feasibility and merits of each method
- Choose the most appropriate method and explain why
- Break complex problems into smaller, manageable steps
## Rigorous Proof
During the solution process:
- Provide solid justification for each step
- Include detailed proofs for key conclusions
- Pay attention to logical connections
- Be vigilant about potential oversights
## Repeated Verification
After completing your solution:
- Verify your results satisfy all conditions
- Check for overlooked special cases
- Consider if the solution can be optimized or simplified
- Review your reasoning process
Remember:
1. Take time to think thoroughly rather than rushing to an answer
2. Rigorously prove each key conclusion
3. Keep an open mind and try different approaches
4. Summarize valuable problem-solving methods
5. Maintain healthy skepticism and verify multiple times
Your response should reflect deep mathematical understanding and precise logical thinking, making your solution path and reasoning clear to others.
When you're ready, present your complete solution with:
- Clear problem understanding
- Detailed solution process
- Key insights
- Thorough verification
Focus on clear, logical progression of ideas and thorough explanation of your mathematical reasoning. Provide answers in the same language as the user asking the question, repeat the final answer using a '\\boxed{}' without any units, you have [[8192]] tokens to complete the answer.
"""
```
#### Transformers inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_dir = "internlm/internlm3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
# (Optional) If on low resource devices, you can load model in 4-bit or 8-bit to further save GPU memory via bitsandbytes.
# InternLM3 8B in 4bit will cost nearly 8GB GPU memory.
# pip install -U bitsandbytes
# 8-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
messages = [
{"role": "system", "content": thinking_system_prompt},
{"role": "user", "content": "Given the function\(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\)."},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
generated_ids = model.generate(tokenized_chat, max_new_tokens=8192)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(tokenized_chat, generated_ids)
]
prompt = tokenizer.batch_decode(tokenized_chat)[0]
print(prompt)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
#### LMDeploy inference
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following Python code:
```python
from lmdeploy import pipeline, GenerationConfig, ChatTemplateConfig
model_dir = "internlm/internlm3-8b-instruct"
chat_template_config = ChatTemplateConfig(model_name='internlm3')
pipe = pipeline(model_dir, chat_template_config=chat_template_config)
messages = [
{"role": "system", "content": thinking_system_prompt},
{"role": "user", "content": "Given the function\(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\)."},
]
response = pipe(messages, gen_config=GenerationConfig(max_new_tokens=2048))
print(response)
```
#### Ollama inference
First install Ollama:
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch model
ollama pull internlm/internlm3-8b-instruct
# install
pip install ollama
```
Inference code:
```python
import ollama
messages = [
{
"role": "system",
"content": thinking_system_prompt,
},
{
"role": "user",
"content": "Given the function\(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\)."
},
]
stream = ollama.chat(
model='internlm/internlm3-8b-instruct',
messages=messages,
stream=True,
)
for chunk in stream:
print(chunk['message']['content'], end='', flush=True)
```
#### vLLM inference
Refer to the [installation guide](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to install the latest version of vLLM:
```bash
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
Inference code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="internlm/internlm3-8b-instruct")
sampling_params = SamplingParams(temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8, max_tokens=8192)
prompts = [
{
"role": "system",
"content": thinking_system_prompt,
},
{
"role": "user",
"content": "Given the function\(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\)."
},
]
outputs = llm.chat(prompts,
sampling_params=sampling_params,
use_tqdm=False)
print(outputs)
```
## Open Source License
Code and model weights are licensed under Apache-2.0.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Introduction
### InternLM3-8B-Instruct
InternLM3, the third generation of the InternLM (书生·浦语) series, open-sources InternLM3-8B-Instruct, an 8-billion-parameter instruction model built for general-purpose use and advanced reasoning. The model has the following characteristics:
- **Enhanced performance at reduced cost**:
State-of-the-art performance on reasoning and knowledge-intensive tasks among models of the same scale, surpassing Llama3.1-8B and Qwen2.5-7B. Notably, InternLM3 was trained on only 4 trillion tokens, cutting the training cost by more than 75% compared with models of the same size.
- **Deep thinking capability**:
InternLM3 supports a deep thinking mode that solves complex reasoning tasks via long chains of thought, while also providing a smoother user experience in its normal response mode.
#### Performance Evaluation
We conducted a comprehensive evaluation of InternLM with the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/), covering five dimensions of capability: disciplinary competence, language, knowledge, reasoning, and comprehension. Some of the results are shown in the table below; visit the [OpenCompass leaderboard](https://rank.opencompass.org.cn) for more evaluation results.
|              | Benchmarks \ Models             | InternLM3-8B-Instruct | Qwen2.5-7B-Instruct | Llama3.1-8B-Instruct | GPT-4o-mini (closed source) |
| ------------ | ------------------------------- | --------------------- | ------------------- | -------------------- | ----------------- |
| General | CMMLU(0-shot) | **83.1** | 75.8 | 53.9 | 66.0 |
| | MMLU(0-shot) | 76.6 | **76.8** | 71.8 | 82.7 |
| | MMLU-Pro(0-shot) | **57.6** | 56.2 | 48.1 | 64.1 |
| Reasoning | GPQA-Diamond(0-shot) | **37.4** | 33.3 | 24.2 | 42.9 |
| | DROP(0-shot) | **83.1** | 80.4 | 81.6 | 85.2 |
| | HellaSwag(10-shot) | **91.2** | 85.3 | 76.7 | 89.5 |
| | KOR-Bench(0-shot) | **56.4** | 44.6 | 47.7 | 58.2 |
| MATH | MATH-500(0-shot) | **83.0*** | 72.4 | 48.4 | 74.0 |
| | AIME2024(0-shot) | **20.0*** | 16.7 | 6.7 | 13.3 |
| Coding | LiveCodeBench(2407-2409 Pass@1) | **17.8** | 16.8 | 12.9 | 21.8 |
| | HumanEval(Pass@1) | 82.3 | **85.4** | 72.0 | 86.6 |
| Instruction  | IFEval(Prompt-Strict)           | **79.3**              | 71.7                | 75.2                 | 79.7              |
| LongContext | RULER(4-128K Average) | 87.9 | 81.4 | **88.5** | 90.7 |
| Chat | AlpacaEval 2.0(LC WinRate) | **51.1** | 30.3 | 25.0 | 50.7 |
| | WildBench(Raw Score) | **33.1** | 23.3 | 1.5 | 40.3 |
| | MT-Bench-101(Score 1-10) | **8.59** | 8.49 | 8.37 | 8.87 |
- Bold values indicate the highest score among the open-source models compared.
- The results above were obtained with [OpenCompass](https://github.com/internLM/OpenCompass/) (entries marked with `*` were evaluated in deep thinking mode); see the configuration files provided in [OpenCompass](https://github.com/internLM/OpenCompass/) for evaluation details.
- Evaluation numbers may differ across versions of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to the results from the latest version of [OpenCompass](https://github.com/internLM/OpenCompass/).
**Limitations:** Although we have placed great emphasis on model safety during training and strive to encourage the model to produce text that complies with ethical and legal requirements, the model may still generate unexpected outputs due to its size and probabilistic generation paradigm. For example, responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences arising from the dissemination of harmful information.
#### Requirements
```text
transformers >= 4.48
```
#### Conversation Mode
##### Transformers inference
Load the InternLM3 8B Instruct model with the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_dir = "internlm/internlm3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
# (Optional) If on low resource devices, you can load model in 4-bit or 8-bit to further save GPU memory via bitsandbytes.
# InternLM3 8B in 4bit will cost nearly 8GB GPU memory.
# pip install -U bitsandbytes
# 8-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "Please tell me five scenic spots in Shanghai"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
generated_ids = model.generate(tokenized_chat, max_new_tokens=1024, temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(tokenized_chat, generated_ids)
]
prompt = tokenizer.batch_decode(tokenized_chat)[0]
print(prompt)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
##### LMDeploy inference
LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following Python code:
```python
import lmdeploy
model_dir = "internlm/internlm3-8b-instruct"
pipe = lmdeploy.pipeline(model_dir)
response = pipe(["Please tell me five scenic spots in Shanghai"])
print(response)
```
Or you can launch an OpenAI-compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm3-8b-instruct --model-name internlm3-8b-instruct --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm3-8b-instruct",
"messages": [
{"role": "user", "content": "介绍一下深度学习。"}
]
}'
```
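As a supplementary sketch (not part of the original card), the same server can also be queried with the `openai` Python client (v1 API); the placeholder API key reflects that the local server does not require a real key by default:
```python
from openai import OpenAI

# Point the client at the local LMDeploy server started above.
client = OpenAI(base_url="http://localhost:23333/v1", api_key="none")
response = client.chat.completions.create(
    model="internlm3-8b-instruct",
    messages=[{"role": "user", "content": "Introduce deep learning."}],
)
print(response.choices[0].message.content)
```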
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
##### Ollama inference
Setup:
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch the model
ollama pull internlm/internlm3-8b-instruct
# install the python library
pip install ollama
```
Inference code:
```python
import ollama
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
messages = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": "Please tell me five scenic spots in Shanghai"
},
]
stream = ollama.chat(
model='internlm/internlm3-8b-instruct',
messages=messages,
stream=True,
)
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
```
##### vLLM inference
Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to install the latest version of vLLM:
```bash
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
Inference code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="internlm/internlm3-8b-instruct")
sampling_params = SamplingParams(temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8)
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
prompts = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": "Please tell me five scenic spots in Shanghai"
},
]
outputs = llm.chat(prompts,
sampling_params=sampling_params,
use_tqdm=False)
print(outputs)
```
#### Thinking Mode
##### Thinking Demo
<img src="https://github.com/InternLM/InternLM/blob/017ba7446d20ecc3b9ab8e7b66cc034500868ab4/assets/solve_puzzle.png?raw=true" width="400"/>
##### Thinking system prompt
```python
thinking_system_prompt = """You are an expert mathematician with extensive experience in mathematical competitions. You approach problems through systematic thinking and rigorous reasoning. When solving problems, follow these thought processes:
## Deep Understanding
Take time to fully comprehend the problem before attempting a solution. Consider:
- What is the real question being asked?
- What are the given conditions and what do they tell us?
- Are there any special restrictions or assumptions?
- Which information is crucial and which is supplementary?
## Multi-angle Analysis
Before solving, conduct thorough analysis:
- What mathematical concepts and properties are involved?
- Can you recall similar classic problems or solution methods?
- Would diagrams or tables help visualize the problem?
- Are there special cases that need separate consideration?
## Systematic Thinking
Plan your solution path:
- Propose multiple possible approaches
- Analyze the feasibility and merits of each method
- Choose the most appropriate method and explain why
- Break complex problems into smaller, manageable steps
## Rigorous Proof
During the solution process:
- Provide solid justification for each step
- Include detailed proofs for key conclusions
- Pay attention to logical connections
- Be vigilant about potential oversights
## Repeated Verification
After completing your solution:
- Verify your results satisfy all conditions
- Check for overlooked special cases
- Consider if the solution can be optimized or simplified
- Review your reasoning process
Remember:
1. Take time to think thoroughly rather than rushing to an answer
2. Rigorously prove each key conclusion
3. Keep an open mind and try different approaches
4. Summarize valuable problem-solving methods
5. Maintain healthy skepticism and verify multiple times
Your response should reflect deep mathematical understanding and precise logical thinking, making your solution path and reasoning clear to others.
When you're ready, present your complete solution with:
- Clear problem understanding
- Detailed solution process
- Key insights
- Thorough verification
Focus on clear, logical progression of ideas and thorough explanation of your mathematical reasoning. Provide answers in the same language as the user asking the question, repeat the final answer using a '\\boxed{}' without any units, you have [[8192]] tokens to complete the answer.
"""
```
##### Transformers inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_dir = "internlm/internlm3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
# (Optional) If on low resource devices, you can load model in 4-bit or 8-bit to further save GPU memory via bitsandbytes.
# InternLM3 8B in 4bit will cost nearly 8GB GPU memory.
# pip install -U bitsandbytes
# 8-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
messages = [
{"role": "system", "content": thinking_system_prompt},
{"role": "user", "content": "已知函数\(f(x)=\mathrm{e}^{x}-ax - a^{3}\)。\n(1)当\(a = 1\)时,求曲线\(y = f(x)\)在点\((1,f(1))\)处的切线方程;\n(2)若\(f(x)\)有极小值,且极小值小于\(0\),求\(a\)的取值范围。"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
generated_ids = model.generate(tokenized_chat, max_new_tokens=8192)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(tokenized_chat, generated_ids)
]
prompt = tokenizer.batch_decode(tokenized_chat)[0]
print(prompt)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
##### LMDeploy inference
LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following python code:
```python
from lmdeploy import pipeline, GenerationConfig, ChatTemplateConfig
model_dir = "internlm/internlm3-8b-instruct"
chat_template_config = ChatTemplateConfig(model_name='internlm3')
pipe = pipeline(model_dir, chat_template_config=chat_template_config)
messages = [
{"role": "system", "content": thinking_system_prompt},
{"role": "user", "content": "已知函数\(f(x)=\mathrm{e}^{x}-ax - a^{3}\)。\n(1)当\(a = 1\)时,求曲线\(y = f(x)\)在点\((1,f(1))\)处的切线方程;\n(2)若\(f(x)\)有极小值,且极小值小于\(0\),求\(a\)的取值范围。"},
]
response = pipe(messages, gen_config=GenerationConfig(max_new_tokens=2048))
print(response)
```
##### Ollama inference
Setup:
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch the model
ollama pull internlm/internlm3-8b-instruct
# install the python library
pip install ollama
```
Inference code:
```python
import ollama
messages = [
{
"role": "system",
"content": thinking_system_prompt,
},
{
"role": "user",
"content": "Given the function\(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\)."
},
]
stream = ollama.chat(
model='internlm/internlm3-8b-instruct',
messages=messages,
stream=True,
)
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
```
##### vLLM inference
Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to install the latest version of vLLM:
```bash
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
Inference code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="internlm/internlm3-8b-instruct")
sampling_params = SamplingParams(temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8, max_tokens=8192)
prompts = [
{
"role": "system",
"content": thinking_system_prompt,
},
{
"role": "user",
"content": "已知函数\(f(x)=\mathrm{e}^{x}-ax - a^{3}\)。\n(1)当\(a = 1\)时,求曲线\(y = f(x)\)在点\((1,f(1))\)处的切线方程;\n(2)若\(f(x)\)有极小值,且极小值小于\(0\),求\(a\)的取值范围。"
},
]
outputs = llm.chat(prompts,
sampling_params=sampling_params,
use_tqdm=False)
print(outputs)
```
## Open Source License
Code and model weights in this repository are licensed under Apache-2.0.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
theprint/Mistral-7b-Instruct-v0.2-python-18k | theprint | "2024-06-10T19:36:03Z" | 16 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-04-13T01:10:59Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gokuls/hBERTv1_no_pretrain_rte | gokuls | "2023-06-15T09:38:15Z" | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-29T10:25:42Z" | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_no_pretrain_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5270758122743683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_no_pretrain_rte
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7561 | 1.0 | 26 | 0.6977 | 0.4729 |
| 0.7108 | 2.0 | 52 | 0.7333 | 0.4729 |
| 0.7378 | 3.0 | 78 | 0.6919 | 0.5271 |
| 0.7045 | 4.0 | 104 | 0.7052 | 0.5271 |
| 0.7077 | 5.0 | 130 | 0.7034 | 0.5271 |
| 0.6816 | 6.0 | 156 | 0.7515 | 0.5343 |
| 0.6692 | 7.0 | 182 | 0.7616 | 0.5235 |
| 0.5846 | 8.0 | 208 | 0.9617 | 0.4838 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
RayBoustany/Covid-Chatbot-Phi2-Merged | RayBoustany | "2024-04-04T12:28:55Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-04T11:51:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
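The card leaves this section blank; as a minimal, hedged sketch (not an official snippet), the merged checkpoint should load with standard `transformers` calls. The `trust_remote_code=True` flag matches the repo's `custom_code` tag, and the prompt is illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RayBoustany/Covid-Chatbot-Phi2-Merged"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

# Illustrative prompt; the expected prompt format is not documented in this card.
inputs = tokenizer("What are common COVID-19 symptoms?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```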
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anandanand84/llama3-8b-otc-unsloth | anandanand84 | "2024-04-29T01:47:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b",
"base_model:finetune:unsloth/llama-3-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-29T01:46:52Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b
---
# Uploaded model
- **Developed by:** anandanand84
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/functionary-small-v2.2-GGUF | mradermacher | "2024-12-30T20:39:40Z" | 16 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:codys12/functionary-small-v2.2",
"base_model:quantized:codys12/functionary-small-v2.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-30T19:14:17Z" | ---
base_model: codys12/functionary-small-v2.2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/codys12/functionary-small-v2.2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/functionary-small-v2.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
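As a supplementary sketch (not part of the original card), a single-file quant can be run locally with `llama-cpp-python`; the filename below is one of the quants listed in the table, and the chat call assumes the GGUF ships a chat template:

```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo via huggingface_hub.
llm = Llama.from_pretrained(
    repo_id="mradermacher/functionary-small-v2.2-GGUF",
    filename="functionary-small-v2.2.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```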
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v2.2-GGUF/resolve/main/functionary-small-v2.2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
alexm-nm/tinyllama-24-marlin24-4bit-g128 | alexm-nm | "2024-05-08T13:37:29Z" | 15,404 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-05-08T13:32:27Z" | ---
license: apache-2.0
---
|
sgbyteninja/sentiment_analysis_with_roBERTa | sgbyteninja | "2025-03-24T11:34:08Z" | 23 | 0 | null | [
"safetensors",
"roberta",
"license:apache-2.0",
"region:us"
] | null | "2025-03-20T09:06:51Z" | ---
license: apache-2.0
---
|
diogopaes10/012-microsoft-deberta-v3-base-finetuned-yahoo-8000_2000 | diogopaes10 | "2023-07-22T21:44:59Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-22T21:38:25Z" | ---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: 012-microsoft-deberta-v3-base-finetuned-yahoo-8000_2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 012-microsoft-deberta-v3-base-finetuned-yahoo-8000_2000
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9425
- F1: 0.7138
- Accuracy: 0.718
- Precision: 0.7184
- Recall: 0.718
- System Ram Used: 4.1370
- System Ram Total: 83.4807
- Gpu Ram Allocated: 2.0897
- Gpu Ram Cached: 25.8555
- Gpu Ram Total: 39.5640
- Gpu Utilization: 46
- Disk Space Used: 29.6434
- Disk Space Total: 78.1898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall | System Ram Used | System Ram Total | Gpu Ram Allocated | Gpu Ram Cached | Gpu Ram Total | Gpu Utilization | Disk Space Used | Disk Space Total |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|:---------------:|:----------------:|:-----------------:|:--------------:|:-------------:|:---------------:|:---------------:|:----------------:|
| 2.2963 | 0.2 | 50 | 2.2150 | 0.1298 | 0.2015 | 0.2090 | 0.2015 | 3.9807 | 83.4807 | 2.0898 | 25.8457 | 39.5640 | 48 | 24.8073 | 78.1898 |
| 1.8843 | 0.4 | 100 | 1.4590 | 0.5588 | 0.592 | 0.6418 | 0.592 | 3.9979 | 83.4807 | 2.0898 | 25.8477 | 39.5640 | 49 | 24.8074 | 78.1898 |
| 1.3348 | 0.6 | 150 | 1.1809 | 0.6613 | 0.668 | 0.6736 | 0.668 | 3.9836 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 24.8074 | 78.1898 |
| 1.1501 | 0.8 | 200 | 1.0484 | 0.6929 | 0.695 | 0.6981 | 0.695 | 3.9695 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 51 | 24.8074 | 78.1898 |
| 1.0842 | 1.0 | 250 | 1.0265 | 0.6825 | 0.6905 | 0.6894 | 0.6905 | 3.9755 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 50 | 24.8075 | 78.1898 |
| 0.8618 | 1.2 | 300 | 0.9904 | 0.7024 | 0.704 | 0.7048 | 0.704 | 3.9708 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 50 | 24.8075 | 78.1898 |
| 0.9329 | 1.4 | 350 | 0.9927 | 0.6825 | 0.686 | 0.6939 | 0.686 | 3.9595 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 48 | 24.8076 | 78.1898 |
| 0.9053 | 1.6 | 400 | 0.9795 | 0.7021 | 0.705 | 0.7048 | 0.705 | 3.9837 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 48 | 24.8076 | 78.1898 |
| 0.9173 | 1.8 | 450 | 0.9749 | 0.7024 | 0.709 | 0.7140 | 0.709 | 3.9851 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 48 | 24.8077 | 78.1898 |
| 0.9189 | 2.0 | 500 | 0.9425 | 0.7138 | 0.718 | 0.7184 | 0.718 | 3.9949 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 48 | 24.8077 | 78.1898 |
| 0.7727 | 2.2 | 550 | 0.9590 | 0.7101 | 0.7155 | 0.7150 | 0.7155 | 4.1847 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 45 | 29.6429 | 78.1898 |
| 0.7092 | 2.4 | 600 | 0.9389 | 0.7180 | 0.7215 | 0.7177 | 0.7215 | 4.1798 | 83.4807 | 2.0901 | 25.8555 | 39.5640 | 47 | 29.6429 | 78.1898 |
| 0.737 | 2.6 | 650 | 0.9606 | 0.7074 | 0.715 | 0.7144 | 0.715 | 4.1766 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 51 | 29.6430 | 78.1898 |
| 0.7334 | 2.8 | 700 | 0.9348 | 0.7175 | 0.72 | 0.7180 | 0.72 | 4.1699 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 50 | 29.6430 | 78.1898 |
| 0.7316 | 3.0 | 750 | 0.9407 | 0.7230 | 0.7275 | 0.7238 | 0.7275 | 4.1785 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 50 | 29.6431 | 78.1898 |
| 0.6045 | 3.2 | 800 | 0.9300 | 0.7208 | 0.721 | 0.7253 | 0.721 | 4.1864 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 48 | 29.6431 | 78.1898 |
| 0.6262 | 3.4 | 850 | 0.9416 | 0.7165 | 0.7175 | 0.7184 | 0.7175 | 4.1847 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 29.6431 | 78.1898 |
| 0.5999 | 3.6 | 900 | 0.9542 | 0.7155 | 0.718 | 0.7156 | 0.718 | 4.1891 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 47 | 29.6431 | 78.1898 |
| 0.6436 | 3.8 | 950 | 0.9580 | 0.7085 | 0.7115 | 0.7127 | 0.7115 | 4.1644 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 29.6431 | 78.1898 |
| 0.59 | 4.0 | 1000 | 0.9476 | 0.7209 | 0.723 | 0.7208 | 0.723 | 4.1608 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 47 | 29.6432 | 78.1898 |
| 0.5422 | 4.2 | 1050 | 0.9658 | 0.7201 | 0.7205 | 0.7224 | 0.7205 | 4.1462 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 46 | 31.7150 | 78.1898 |
| 0.5205 | 4.4 | 1100 | 0.9674 | 0.7122 | 0.7155 | 0.7128 | 0.7155 | 4.1598 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 31.7151 | 78.1898 |
| 0.5253 | 4.6 | 1150 | 0.9563 | 0.7175 | 0.7195 | 0.7185 | 0.7195 | 4.1854 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 31.7151 | 78.1898 |
| 0.5109 | 4.8 | 1200 | 0.9621 | 0.7201 | 0.722 | 0.7192 | 0.722 | 4.1908 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 31.7151 | 78.1898 |
| 0.5216 | 5.0 | 1250 | 0.9635 | 0.7190 | 0.7215 | 0.7189 | 0.7215 | 4.1862 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 50 | 31.7151 | 78.1898 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
botisan-ai/mt5-translate-zh-yue | botisan-ai | "2023-11-14T05:53:37Z" | 92 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"zh",
"yue",
"multilingual",
"dataset:x-tech/cantonese-mandarin-translations",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language:
- zh
- yue
- multilingual
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- x-tech/cantonese-mandarin-translations
base_model: google/mt5-base
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on dataset [x-tech/cantonese-mandarin-translations](https://huggingface.co/datasets/x-tech/cantonese-mandarin-translations).
## Model description
The model translates Mandarin sentences to Cantonese.
## Intended uses & limitations
When you use the model, please make sure to add `translate mandarin to cantonese: <sentence>` (please note the space after colon) before the text you want to translate.
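As a minimal sketch of that prompt format (not part of the original card; the example sentence is illustrative):

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

ckpt = "botisan-ai/mt5-translate-zh-yue"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = MT5ForConditionalGeneration.from_pretrained(ckpt)

# Note the space after the colon, as required above.
text = "translate mandarin to cantonese: 你好,今天天气很好。"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```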
## Training and evaluation data
Training Dataset: [x-tech/cantonese-mandarin-translations](https://huggingface.co/datasets/x-tech/cantonese-mandarin-translations)
## Training procedure
Training is based on [example in transformers library](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
Since we still need to set up a validation set, we do not have any training results yet.
### Framework versions
- Transformers 4.12.5
- Pytorch 1.8.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
twodigit/price20 | twodigit | "2024-12-17T03:43:39Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-17T03:38:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
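The card leaves this section blank; as a hedged sketch, a Gemma-2-based checkpoint can typically be loaded and prompted through its chat template (the prompt is illustrative and nothing here is confirmed by the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "twodigit/price20"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```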
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cs-giung/convnext-v1-small-imagenet21k | cs-giung | "2024-06-01T16:38:26Z" | 197 | 0 | transformers | [
"transformers",
"safetensors",
"convnext",
"image-classification",
"arxiv:2201.03545",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-01T16:29:34Z" | ---
license: apache-2.0
---
# ConvNext
ConvNext model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545).
The weights were converted from the `convnext_small_22k_224.pth` file presented in the [official repository](https://github.com/facebookresearch/ConvNeXt).
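As a hedged usage sketch (not in the original card), the checkpoint should load through the standard `transformers` ConvNeXt classes given the repo's `image-classification` tag; the image URL is a placeholder:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ConvNextForImageClassification

ckpt = "cs-giung/convnext-v1-small-imagenet21k"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = ConvNextForImageClassification.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Maps the top logit to one of the 21,843 ImageNet-21k classes.
print(model.config.id2label[logits.argmax(-1).item()])
```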
|
devashishg/my-trained-model | devashishg | "2024-03-10T08:40:29Z" | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2024-03-10T08:40:27Z" |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a man closeup portrait
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
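The card includes no inference snippet; as a hedged sketch, AutoTrain DreamBooth runs on SDXL typically produce LoRA weights that load on top of the base model with `diffusers` (the LoRA loading call assumes such weights exist in this repo, and the prompt is the instance prompt from the card metadata):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Assumes this repo holds LoRA weights from the AutoTrain DreamBooth run.
pipe.load_lora_weights("devashishg/my-trained-model")

image = pipe("photo of a man closeup portrait").images[0]
image.save("portrait.png")
```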
|
sidraina/ppo-LunarLander-v2 | sidraina | "2023-05-01T05:07:36Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-30T06:47:23Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.20 +/- 26.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gym
from huggingface_sb3 import load_from_hub, package_to_hub, push_to_hub
from huggingface_hub import notebook_login # To log to our Hugging Face account to be able to upload models to the Hub.
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_vec_env
# Create the environment
env = make_vec_env('LunarLander-v2', n_envs=16)
# Define a PPO MlpPolicy architecture
model = PPO(
policy = 'MlpPolicy',
env = env,
n_steps = 1024,
batch_size = 64,
n_epochs = 4,
gamma = 0.999,
gae_lambda = 0.98,
ent_coef = 0.01,
verbose=1)
# Train the policy for 1,000,000 timesteps
model.learn(total_timesteps=int(1e6))
model_name = "lunar-landing-agent-sid"
model.save(model_name)
# Evaluate policy
# Create a new environment for evaluation
eval_env = gym.make("LunarLander-v2")
# Evaluate the model with 10 evaluation episodes and deterministic=True
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
# Print the results
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Package to hub
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.env_util import make_vec_env
from huggingface_sb3 import package_to_hub
repo_id = "sidraina/ppo-LunarLander-v2"
env_id = "LunarLander-v2"
# Create the evaluation env
eval_env = DummyVecEnv([lambda: gym.make(env_id)])
model_architecture = "PPO"
commit_message = "First PPO LunarLander-v2 trained agent"
# method save, evaluate, generate a model card and record a replay video of your agent before pushing the repo to the hub
package_to_hub(model=model,
model_name=model_name,
model_architecture=model_architecture,
env_id=env_id,
eval_env=eval_env,
repo_id=repo_id,
commit_message=commit_message)
...
```
|
manpreets7/dreamshaper-3.2 | manpreets7 | "2023-03-20T20:51:10Z" | 5 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-03-20T20:45:45Z" | This is DreamShaper 3.32 with baked-in vae and clip fix
You can try it out here: https://sinkin.ai/m/4zdwGOB
Read more about it here: https://civitai.com/models/4384/dreamshaper |
yanaiela/roberta-base-epoch_40 | yanaiela | "2022-07-29T22:53:26Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_40",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-07-28T17:21:04Z" | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_40
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 40
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_40.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_40', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
PipEvangelist-USF/my_awesome_qa_model | PipEvangelist-USF | "2023-12-11T06:13:24Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-12-11T00:37:34Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: PipEvangelist-USF/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# PipEvangelist-USF/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5090
- Validation Loss: 1.8129
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
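The card does not yet document usage; as a minimal, hedged sketch, the checkpoint should work with the extractive question-answering pipeline (`framework="tf"` reflects that this repo hosts TensorFlow weights):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="PipEvangelist-USF/my_awesome_qa_model",
    framework="tf",  # the repo hosts TensorFlow weights
)
result = qa(
    question="What is extractive QA?",
    context="Extractive question answering selects the answer span directly from the context.",
)
print(result["answer"])
```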
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4309 | 2.1239 | 0 |
| 1.7606 | 1.8129 | 1 |
| 1.5090 | 1.8129 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Narzaal/q-FrozenLake-v1-4x4-noSlippery | Narzaal | "2024-06-07T08:26:25Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-07T08:26:22Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="Narzaal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
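As a hedged follow-up sketch: checkpoints saved by the Hugging Face Deep RL course store the Q-table under a `qtable` key in the pickled dict, so a greedy rollout looks roughly like this (the key name and the 5-tuple `step` API are assumptions, not confirmed by this card):

```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    # Greedy action from the learned Q-table.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```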
|
mrm8488/spanish-t5-small-sqac-for-qa | mrm8488 | "2021-09-03T10:22:10Z" | 132 | 4 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"QA",
"Q&A",
"es",
"dataset:BSC-TeMU/SQAC",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language: es
tags:
- QA
- Q&A
datasets:
- BSC-TeMU/SQAC
widget:
- text: "question: ¿Cuál es el nombre que se le da a la unidad morfológica y funcional de los seres vivos? context: La célula (del latín cellula, diminutivo de cella, ‘celda’) es la unidad morfológica y funcional de todo ser vivo. De hecho, la célula es el elemento de menor tamaño que puede considerarse vivo.\u200b De este modo, puede clasificarse a los organismos vivos según el número de células que posean: si solo tienen una, se les denomina unicelulares (como pueden ser los protozoos o las bacterias, organismos microscópicos); si poseen más, se les llama pluricelulares. En estos últimos el número de células es variable: de unos pocos cientos, como en algunos nematodos, a cientos de billones (1014), como en el caso del ser humano. Las células suelen poseer un tamaño de 10 µm y una masa de 1 ng, si bien existen células mucho mayores."
---
# Spanish T5 (small) fine-tuned on **SQAC** for Spanish **QA** 📖❓
[spanish-T5-small](https://huggingface.co/flax-community/spanish-t5-small) fine-tuned on [SQAC](https://huggingface.co/datasets/BSC-TeMU/SQAC) for **Q&A** downstream task.
## Details of Spanish T5 (small)
A T5 (small)-like architecture trained from scratch on [large_spanish_corpus](https://huggingface.co/datasets/large_spanish_corpus) for the **HuggingFace/Flax/Jax Week**.
## Details of the dataset 📚
This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment.
The sources of the contexts are:
* Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
* News from [Wikinews in Spanish](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/).
* Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix from different newswire and literature sources, used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode).
This dataset can be used to build extractive-QA.
## Results on test dataset 📝
| Metric | # Value |
| ------ | --------- |
| **BLEU** | **41.94** |
## Model in Action 🚀
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
ckpt = 'mrm8488/spanish-t5-small-sqac-for-qa'
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = T5ForConditionalGeneration.from_pretrained(ckpt).to(device)
def get_answer(question, context):
    input_text = 'question: %s context: %s' % (question, context)
    features = tokenizer([input_text], padding='max_length', truncation=True, max_length=512, return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'].to(device),
                            attention_mask=features['attention_mask'].to(device))
    return tokenizer.decode(output[0], skip_special_tokens=True)
context = '''
La ex codirectora del grupo de investigación de IA ética de Google, Margaret Mitchell,
quien fue despedida en febrero después de una controversia sobre un artículo crítico del que fue coautora,
se unirá a HuggingFace para ayudar a que los algoritmos de IA sean más justos.
'''
question = '¿Qué hará Margaret Mitchell en HuggingFace?'
print(get_answer(question, context))
# ayudar a que los algoritmos de ia sean más justos
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
addy88/wav2vec2-assamese-stt | addy88 | "2021-12-19T16:55:56Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
def parse_transcription(wav_file):
    # load pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-assamese-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-assamese-stt")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # inference: retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
mradermacher/Calme-Ties-78B-i1-GGUF | mradermacher | "2025-02-01T03:40:10Z" | 720 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:prithivMLmods/Calme-Ties-78B",
"base_model:quantized:prithivMLmods/Calme-Ties-78B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-30T13:45:01Z" | ---
base_model: prithivMLmods/Calme-Ties-78B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Calme-Ties-78B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Calme-Ties-78B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
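A minimal Python sketch of the concatenation step (file names taken from the table below; equivalent to `cat part1 part2 > out.gguf`):

```python
import shutil
from pathlib import Path

# Rebuild a split GGUF from its parts before loading it.
parts = sorted(Path(".").glob("Calme-Ties-78B.i1-Q4_K_M.gguf.part*"))
with open("Calme-Ties-78B.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # stream each part without loading it into RAM
```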
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ1_S.gguf) | i1-IQ1_S | 24.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ1_M.gguf) | i1-IQ1_M | 25.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.4 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 29.1 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ2_S.gguf) | i1-IQ2_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ2_M.gguf) | i1-IQ2_M | 31.5 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 31.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q2_K.gguf) | i1-Q2_K | 31.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 34.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 35.2 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 36.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ3_S.gguf) | i1-IQ3_S | 37.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ3_M.gguf) | i1-IQ3_M | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 40.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 42.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 42.7 | |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q4_0.gguf) | i1-Q4_0 | 44.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 47.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q4_1.gguf) | i1-Q4_1 | 49.1 | |
| [PART 1](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 50.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 55.2 | |
| [PART 1](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 58.4 | |
| [PART 1](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Calme-Ties-78B-i1-GGUF/resolve/main/Calme-Ties-78B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 69.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kiranpantha/whisper-large-v3-nepali-peft-lora-speaker2-rank64-targetxqv-epochs3 | kiranpantha | "2025-01-25T09:38:43Z" | 9 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ne",
"dataset:kiranpantha/OpenSLR54-Balanced-Nepali",
"base_model:kiranpantha/whisper-large-v3-nepali",
"base_model:adapter:kiranpantha/whisper-large-v3-nepali",
"license:apache-2.0",
"region:us"
] | null | "2025-01-25T09:38:33Z" | ---
library_name: peft
language:
- ne
license: apache-2.0
base_model: kiranpantha/whisper-large-v3-nepali
tags:
- generated_from_trainer
datasets:
- kiranpantha/OpenSLR54-Balanced-Nepali
model-index:
- name: kiranpantha/whisper-large-v3-nepali
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kiranpantha/whisper-large-v3-nepali
This model is a fine-tuned version of [kiranpantha/whisper-large-v3-nepali](https://huggingface.co/kiranpantha/whisper-large-v3-nepali) on the OpenSLR54 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2982
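A minimal loading sketch, assuming the standard PEFT adapter flow (whether processor files live in the base repo is an assumption):

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Attach this LoRA adapter to the base checkpoint it was trained from.
base = WhisperForConditionalGeneration.from_pretrained("kiranpantha/whisper-large-v3-nepali")
model = PeftModel.from_pretrained(base, "kiranpantha/whisper-large-v3-nepali-peft-lora-speaker2-rank64-targetxqv-epochs3")
processor = WhisperProcessor.from_pretrained("kiranpantha/whisper-large-v3-nepali")
```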
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 0.7030 |
| No log | 2.0 | 12 | 0.3471 |
| No log | 3.0 | 18 | 0.2982 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cxx11.abi
- Datasets 3.2.0
- Tokenizers 0.21.0 |
ungonzal/deportistas | ungonzal | "2024-04-29T19:23:55Z" | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | "2024-04-19T17:01:09Z" | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
rahil1206/dqn-SpaceInvadersNoFrameskip-v4 | rahil1206 | "2024-04-23T18:41:25Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-23T18:40:54Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 555.00 +/- 190.14
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rahil1206 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rahil1206 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rahil1206
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
philip-hightech/e43c5a0c-6274-4936-9dee-85b869ec5f8f | philip-hightech | "2025-01-25T07:36:40Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-1.3B",
"base_model:adapter:EleutherAI/gpt-neo-1.3B",
"license:mit",
"region:us"
] | null | "2025-01-25T07:36:03Z" | ---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e43c5a0c-6274-4936-9dee-85b869ec5f8f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 414256b99bc71583_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/414256b99bc71583_train_data.json
type:
field_input: choices
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/e43c5a0c-6274-4936-9dee-85b869ec5f8f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/414256b99bc71583_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a9ee1f6f-a2ee-46d2-8825-32d1c8a14f27
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a9ee1f6f-a2ee-46d2-8825-32d1c8a14f27
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e43c5a0c-6274-4936-9dee-85b869ec5f8f
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5571
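A minimal inference sketch, assuming the standard PEFT flow (the example prompt follows the `'{instruction} {input}'` template from the config above):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the LoRA adapter on top of the GPT-Neo base model.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = PeftModel.from_pretrained(base, "philip-hightech/e43c5a0c-6274-4936-9dee-85b869ec5f8f")

inputs = tokenizer("Which planet is largest? Mars, Jupiter, Venus", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```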
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.0719 | 0.0408 | 1 | 3.1101 |
| 7.458 | 0.1224 | 3 | 3.1172 |
| 8.5649 | 0.2449 | 6 | 3.0420 |
| 5.8478 | 0.3673 | 9 | 2.5571 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rafaeloc15/mistral_instruct_generation | rafaeloc15 | "2024-04-09T16:05:38Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | "2024-04-09T15:48:07Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_generation
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2732 | 0.37 | 20 | 0.1519 |
| 0.0857 | 0.74 | 40 | 0.0748 |
| 0.059 | 1.11 | 60 | 0.0509 |
| 0.0418 | 1.48 | 80 | 0.0395 |
| 0.037 | 1.85 | 100 | 0.0352 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
hjones6315/tokyorevengers3 | hjones6315 | "2025-02-08T02:51:09Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-08T02:48:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
refiners/sd15.text_encoder | refiners | "2024-08-14T13:42:24Z" | 147 | 0 | refiners | [
"refiners",
"safetensors",
"image-to-image",
"stable-diffusion",
"sd1.5",
"art",
"feature-extraction",
"en",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | feature-extraction | "2024-08-08T09:28:54Z" | ---
base_model: runwayml/stable-diffusion-v1-5
base_model_relation: adapter
license: creativeml-openrail-m
language:
- en
library_name: refiners
pipeline_tag: feature-extraction
tags:
- image-to-image
- stable-diffusion
- sd1.5
- art
---
# Stable Diffusion 1.5

## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
|
scvi-tools/tabula-sapiens-prostate-scvi | scvi-tools | "2024-12-08T10:43:28Z" | 0 | 0 | scvi-tools | [
"scvi-tools",
"tensorboard",
"biology",
"genomics",
"single-cell",
"model_cls_name:SCVI",
"scvi_version:1.2.0",
"anndata_version:0.11.1",
"modality:rna",
"tissue:various",
"annotated:True",
"license:cc-by-4.0",
"region:us"
] | null | "2023-03-15T19:27:24Z" | ---
library_name: scvi-tools
license: cc-by-4.0
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:1.2.0
- anndata_version:0.11.1
- modality:rna
- tissue:various
- annotated:True
---
ScVI is a variational inference model for single-cell RNA-seq data that can learn an underlying
latent space, integrate technical batches and impute dropouts.
The learned low-dimensional latent representation of the data can be used for visualization and
clustering.
scVI takes as input a scRNA-seq gene expression matrix with cells and genes.
We provide an extensive [user guide](https://docs.scvi-tools.org/en/1.2.0/user_guide/models/scvi.html).
- See our original manuscript for further details of the model:
[scVI manuscript](https://www.nature.com/articles/s41592-018-0229-2).
- See our manuscript on [scvi-hub](https://www.biorxiv.org/content/10.1101/2024.03.01.582887v2) for how
to leverage pre-trained models.
This model can be used for fine-tuning on new data using our Arches framework:
[Arches tutorial](https://docs.scvi-tools.org/en/1.0.0/tutorials/notebooks/scarches_scvi_tools.html).
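A minimal loading sketch via scvi-tools' hub interface (the `HubModel` API here is assumed from scvi-tools >= 1.0):

```python
import scvi

# Pull the pretrained SCVI model from the Hugging Face Hub.
hub_model = scvi.hub.HubModel.pull_from_huggingface_hub(
    repo_name="scvi-tools/tabula-sapiens-prostate-scvi"
)
model = hub_model.model  # loads SCVI; needs an AnnData, bundled or passed via load_model()
latent = model.get_latent_representation()  # cells x 20 latent coordinates
```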
# Model Description
Tabula Sapiens is a benchmark, first-draft human cell atlas of nearly 500,000 cells from 24 organs of 15 normal human subjects.
# Metrics
We provide here key performance metrics for the uploaded model, if provided by the data uploader.
<details>
<summary><strong>Coefficient of variation</strong></summary>
The cell-wise coefficient of variation summarizes how well variation between different cells is
preserved by the generated model expression. Below a squared Pearson correlation coefficient of 0.4,
we recommend not using the generated data for downstream analysis, although the generated latent
space may still be useful.
**Cell-wise Coefficient of Variation**:
| Metric | Training Value | Validation Value |
|-------------------------|----------------|------------------|
| Mean Absolute Error | 1.94 | 2.12 |
| Pearson Correlation | 0.89 | 0.87 |
| Spearman Correlation | 0.88 | 0.86 |
| R² (R-Squared) | 0.76 | 0.71 |
The gene-wise coefficient of variation summarizes how well variation between different genes is
preserved by the generated model expression. This value is usually quite high.
**Gene-wise Coefficient of Variation**:
| Metric | Training Value |
|-------------------------|----------------|
| Mean Absolute Error | 17.66 |
| Pearson Correlation | 0.65 |
| Spearman Correlation | 0.67 |
| R² (R-Squared) | -1.56 |
</details>
<details>
<summary><strong>Differential expression metric</strong></summary>
The differential expression metric provides a summary of the differential expression analysis
between cell types or input clusters. We provide here the F1-score, Pearson Correlation
Coefficient of Log-Foldchanges, Spearman Correlation Coefficient, and Area Under the Precision
Recall Curve (AUPRC) for the differential expression analysis using Wilcoxon Rank Sum test for each
cell-type.
**Differential expression**:
| Index | gene_f1 | lfc_mae | lfc_pearson | lfc_spearman | roc_auc | pr_auc | n_cells |
| --- | --- | --- | --- | --- | --- | --- | --- |
| epithelial cell | 0.96 | 0.88 | 0.64 | 0.90 | 0.13 | 0.85 | 6637.00 |
| basal cell of prostate epithelium | 0.93 | 0.77 | 0.66 | 0.93 | 0.48 | 0.89 | 3198.00 |
| CD8-positive, alpha-beta T cell | 0.92 | 2.89 | 0.64 | 0.83 | 0.30 | 0.82 | 1081.00 |
| endothelial cell | 0.81 | 3.28 | 0.61 | 0.81 | 0.49 | 0.86 | 471.00 |
| mature NK T cell | 0.87 | 4.37 | 0.56 | 0.73 | 0.35 | 0.78 | 430.00 |
| macrophage | 0.87 | 3.15 | 0.66 | 0.83 | 0.44 | 0.84 | 317.00 |
| club cell | 0.77 | 2.33 | 0.60 | 0.79 | 0.51 | 0.80 | 290.00 |
| smooth muscle cell | 0.83 | 3.19 | 0.64 | 0.81 | 0.50 | 0.83 | 285.00 |
| fibroblast | 0.85 | 3.50 | 0.67 | 0.79 | 0.47 | 0.80 | 207.00 |
| luminal cell of prostate epithelium | 0.70 | 4.12 | 0.62 | 0.71 | 0.48 | 0.76 | 60.00 |
| erythroid progenitor cell | 0.43 | 5.23 | 0.39 | 0.36 | 0.38 | 0.95 | 52.00 |
| monocyte | 0.62 | 4.99 | 0.71 | 0.80 | 0.22 | 0.65 | 33.00 |
| stromal cell | 0.32 | 7.62 | 0.47 | 0.53 | 0.23 | 0.47 | 32.00 |
</details>
# Model Properties
We provide here key parameters used to setup and train the model.
<details>
<summary><strong>Model Parameters</strong></summary>
These provide the settings to setup the original model:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
</details>
<details>
<summary><strong>Setup Data Arguments</strong></summary>
Arguments passed to setup_anndata of the original model:
```json
{
"layer": null,
"batch_key": "donor_assay",
"labels_key": "cell_ontology_class",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
</details>
<details>
<summary><strong>Data Registry</strong></summary>
Registry elements for AnnData manager:
| Registry Key | scvi-tools Location |
|-------------------|--------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['scvi_latent_qzm'] |
| latent_qzv | adata.obsm['scvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['observed_lib_size'] |
- **Data is Minified**: False
</details>
<details>
<summary><strong>Summary Statistics</strong></summary>
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 2 |
| n_cells | 13093 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 13 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 3000 |
</details>
<details>
<summary><strong>Training</strong></summary>
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the
scvi-tools documentation for details. -->
**Training data url**: Not provided by uploader
If provided by the original uploader, for those interested in understanding or replicating the
training process, the code is available at the link below.
**Training Code URL**: https://github.com/YosefLab/scvi-hub-models/blob/main/src/scvi_hub_models/TS_train_all_tissues.ipynb
</details>
# References
The Tabula Sapiens Consortium. The Tabula Sapiens: A multiple-organ, single-cell transcriptomic atlas of humans. Science, May 2022. doi:10.1126/science.abl4896
|
CinnabonMan/huh | CinnabonMan | "2023-05-16T09:38:25Z" | 17 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-12-19T16:01:00Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### huh Dreambooth model trained by CinnabonMan with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
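For example, a minimal `diffusers` sketch (the prompt token for the trained concept is assumed to be `huh`):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth weights and render the concept.
pipe = StableDiffusionPipeline.from_pretrained("CinnabonMan/huh", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of huh", num_inference_steps=30).images[0]
image.save("huh.png")
```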
Sample pictures of this concept:
|
DewiBrynJones/wav2vec2-xlsr-53-ft-btb-ccv-enc-cy | DewiBrynJones | "2024-06-26T13:18:56Z" | 20 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"DewiBrynJones/banc-trawsgrifiadau-bangor-clean-with-ccv",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-26T07:50:36Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- automatic-speech-recognition
- DewiBrynJones/banc-trawsgrifiadau-bangor-clean-with-ccv
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-53-ft-btb-ccv-enc-cy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-ft-btb-ccv-enc-cy
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the DEWIBRYNJONES/BANC-TRAWSGRIFIADAU-BANGOR-CLEAN-WITH-CCV - DEFAULT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Wer: 0.3271
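A minimal transcription sketch (the audio file name is a placeholder; 16 kHz mono input assumed):

```python
from transformers import pipeline

# Transcribe Welsh speech with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition",
               model="DewiBrynJones/wav2vec2-xlsr-53-ft-btb-ccv-enc-cy")
print(asr("sample.wav")["text"])
```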
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| No log | 0.0194 | 100 | 3.5475 | 1.0 |
| No log | 0.0387 | 200 | 3.0259 | 1.0 |
| No log | 0.0581 | 300 | 3.0887 | 1.0 |
| No log | 0.0774 | 400 | 2.3822 | 0.9972 |
| 4.0938 | 0.0968 | 500 | 1.4547 | 0.9020 |
| 4.0938 | 0.1161 | 600 | 1.2603 | 0.8510 |
| 4.0938 | 0.1355 | 700 | 1.0940 | 0.7655 |
| 4.0938 | 0.1549 | 800 | 1.0705 | 0.7602 |
| 4.0938 | 0.1742 | 900 | 0.9356 | 0.6973 |
| 1.0597 | 0.1936 | 1000 | 0.9104 | 0.6766 |
| 1.0597 | 0.2129 | 1100 | 0.8879 | 0.6570 |
| 1.0597 | 0.2323 | 1200 | 0.8595 | 0.6612 |
| 1.0597 | 0.2516 | 1300 | 0.8352 | 0.6075 |
| 1.0597 | 0.2710 | 1400 | 0.7912 | 0.6033 |
| 0.8484 | 0.2904 | 1500 | 0.7862 | 0.6067 |
| 0.8484 | 0.3097 | 1600 | 0.7790 | 0.6009 |
| 0.8484 | 0.3291 | 1700 | 0.7678 | 0.5629 |
| 0.8484 | 0.3484 | 1800 | 0.7515 | 0.5799 |
| 0.8484 | 0.3678 | 1900 | 0.7424 | 0.5859 |
| 0.764 | 0.3871 | 2000 | 0.7130 | 0.5521 |
| 0.764 | 0.4065 | 2100 | 0.7114 | 0.5408 |
| 0.764 | 0.4259 | 2200 | 0.7229 | 0.5577 |
| 0.764 | 0.4452 | 2300 | 0.6773 | 0.5160 |
| 0.764 | 0.4646 | 2400 | 0.6784 | 0.5178 |
| 0.6868 | 0.4839 | 2500 | 0.6720 | 0.5262 |
| 0.6868 | 0.5033 | 2600 | 0.6804 | 0.5337 |
| 0.6868 | 0.5226 | 2700 | 0.6599 | 0.5024 |
| 0.6868 | 0.5420 | 2800 | 0.6287 | 0.4902 |
| 0.6868 | 0.5614 | 2900 | 0.6304 | 0.4947 |
| 0.6761 | 0.5807 | 3000 | 0.6258 | 0.4851 |
| 0.6761 | 0.6001 | 3100 | 0.6311 | 0.4990 |
| 0.6761 | 0.6194 | 3200 | 0.6172 | 0.4901 |
| 0.6761 | 0.6388 | 3300 | 0.6187 | 0.4666 |
| 0.6761 | 0.6581 | 3400 | 0.6045 | 0.4725 |
| 0.6462 | 0.6775 | 3500 | 0.5950 | 0.4717 |
| 0.6462 | 0.6969 | 3600 | 0.5903 | 0.4602 |
| 0.6462 | 0.7162 | 3700 | 0.5865 | 0.4727 |
| 0.6462 | 0.7356 | 3800 | 0.5820 | 0.4590 |
| 0.6462 | 0.7549 | 3900 | 0.6026 | 0.4830 |
| 0.6193 | 0.7743 | 4000 | 0.5807 | 0.4496 |
| 0.6193 | 0.7937 | 4100 | 0.5621 | 0.4486 |
| 0.6193 | 0.8130 | 4200 | 0.5730 | 0.4593 |
| 0.6193 | 0.8324 | 4300 | 0.5592 | 0.4374 |
| 0.6193 | 0.8517 | 4400 | 0.5621 | 0.4239 |
| 0.59 | 0.8711 | 4500 | 0.5458 | 0.4304 |
| 0.59 | 0.8904 | 4600 | 0.5406 | 0.4271 |
| 0.59 | 0.9098 | 4700 | 0.5269 | 0.4132 |
| 0.59 | 0.9292 | 4800 | 0.5362 | 0.4215 |
| 0.59 | 0.9485 | 4900 | 0.5226 | 0.4163 |
| 0.5636 | 0.9679 | 5000 | 0.5297 | 0.4148 |
| 0.5636 | 0.9872 | 5100 | 0.5226 | 0.4136 |
| 0.5636 | 1.0066 | 5200 | 0.5239 | 0.4054 |
| 0.5636 | 1.0259 | 5300 | 0.5383 | 0.4058 |
| 0.5636 | 1.0453 | 5400 | 0.5125 | 0.4067 |
| 0.4924 | 1.0647 | 5500 | 0.5029 | 0.3953 |
| 0.4924 | 1.0840 | 5600 | 0.5054 | 0.3932 |
| 0.4924 | 1.1034 | 5700 | 0.4969 | 0.3894 |
| 0.4924 | 1.1227 | 5800 | 0.4935 | 0.3851 |
| 0.4924 | 1.1421 | 5900 | 0.4977 | 0.3817 |
| 0.4602 | 1.1614 | 6000 | 0.4863 | 0.3874 |
| 0.4602 | 1.1808 | 6100 | 0.4906 | 0.3777 |
| 0.4602 | 1.2002 | 6200 | 0.4891 | 0.3764 |
| 0.4602 | 1.2195 | 6300 | 0.4881 | 0.3801 |
| 0.4602 | 1.2389 | 6400 | 0.4814 | 0.3727 |
| 0.4407 | 1.2582 | 6500 | 0.4714 | 0.3772 |
| 0.4407 | 1.2776 | 6600 | 0.4739 | 0.3706 |
| 0.4407 | 1.2969 | 6700 | 0.4692 | 0.3714 |
| 0.4407 | 1.3163 | 6800 | 0.4673 | 0.3728 |
| 0.4407 | 1.3357 | 6900 | 0.4610 | 0.3678 |
| 0.4284 | 1.3550 | 7000 | 0.4730 | 0.3653 |
| 0.4284 | 1.3744 | 7100 | 0.4606 | 0.3640 |
| 0.4284 | 1.3937 | 7200 | 0.4572 | 0.3620 |
| 0.4284 | 1.4131 | 7300 | 0.4575 | 0.3630 |
| 0.4284 | 1.4324 | 7400 | 0.4578 | 0.3590 |
| 0.4299 | 1.4518 | 7500 | 0.4477 | 0.3569 |
| 0.4299 | 1.4712 | 7600 | 0.4442 | 0.3552 |
| 0.4299 | 1.4905 | 7700 | 0.4420 | 0.3546 |
| 0.4299 | 1.5099 | 7800 | 0.4437 | 0.3483 |
| 0.4299 | 1.5292 | 7900 | 0.4373 | 0.3486 |
| 0.408 | 1.5486 | 8000 | 0.4336 | 0.3464 |
| 0.408 | 1.5679 | 8100 | 0.4348 | 0.3448 |
| 0.408 | 1.5873 | 8200 | 0.4276 | 0.3418 |
| 0.408 | 1.6067 | 8300 | 0.4294 | 0.3399 |
| 0.408 | 1.6260 | 8400 | 0.4272 | 0.3388 |
| 0.3964 | 1.6454 | 8500 | 0.4311 | 0.3409 |
| 0.3964 | 1.6647 | 8600 | 0.4260 | 0.3381 |
| 0.3964 | 1.6841 | 8700 | 0.4260 | 0.3371 |
| 0.3964 | 1.7034 | 8800 | 0.4260 | 0.3364 |
| 0.3964 | 1.7228 | 8900 | 0.4215 | 0.3351 |
| 0.3866 | 1.7422 | 9000 | 0.4234 | 0.3330 |
| 0.3866 | 1.7615 | 9100 | 0.4210 | 0.3319 |
| 0.3866 | 1.7809 | 9200 | 0.4156 | 0.3301 |
| 0.3866 | 1.8002 | 9300 | 0.4158 | 0.3303 |
| 0.3866 | 1.8196 | 9400 | 0.4155 | 0.3294 |
| 0.37 | 1.8389 | 9500 | 0.4137 | 0.3292 |
| 0.37 | 1.8583 | 9600 | 0.4120 | 0.3284 |
| 0.37 | 1.8777 | 9700 | 0.4109 | 0.3301 |
| 0.37 | 1.8970 | 9800 | 0.4100 | 0.3279 |
| 0.37 | 1.9164 | 9900 | 0.4095 | 0.3267 |
| 0.371 | 1.9357 | 10000 | 0.4095 | 0.3271 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
vertings6/8b17cb3d-c5c8-4058-9005-2fe2eef20bff | vertings6 | "2025-01-21T00:20:47Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"base_model:adapter:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"region:us"
] | null | "2025-01-21T00:19:29Z" | ---
library_name: peft
base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8b17cb3d-c5c8-4058-9005-2fe2eef20bff
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 645552bf4fff83a8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/645552bf4fff83a8_train_data.json
type:
field_input: image_id
field_instruction: ori_category_id
field_output: caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: vertings6/8b17cb3d-c5c8-4058-9005-2fe2eef20bff
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/645552bf4fff83a8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2655cf00-8393-42bb-afc7-ec4abb9c509f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2655cf00-8393-42bb-afc7-ec4abb9c509f
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 8b17cb3d-c5c8-4058-9005-2fe2eef20bff
This model is a fine-tuned version of [HuggingFaceH4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceH4/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 10.3756 |
| 10.3795 | 0.0019 | 5 | 10.3745 |
| 10.3742 | 0.0037 | 10 | 10.3717 |
| 10.3719 | 0.0056 | 15 | 10.3692 |
| 10.3698 | 0.0074 | 20 | 10.3675 |
| 10.3685 | 0.0093 | 25 | 10.3667 |
| 10.3674 | 0.0112 | 30 | 10.3666 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
damgomz/ft_1_4e6_base_x12 | damgomz | "2024-06-19T07:02:26Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-17T15:37:07Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 148163.46079468727 |
| Emissions (Co2eq in kg) | 0.0896559763128698 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.7491481642598954 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.154335335302104 |
| Consumed energy (kWh) | 1.9034834995620016 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.285214662029773 |
| Emissions (Co2eq in kg) | 0.05803068881125252 |
## Note
14 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_1_4e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 4e-06 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.720573 | 0.337178 |
| 1 | 0.320833 | 0.260752 | 0.912109 |
| 2 | 0.216310 | 0.220284 | 0.910982 |
| 3 | 0.173920 | 0.226664 | 0.925075 |
| 4 | 0.135464 | 0.232773 | 0.914038 |
| 5 | 0.099874 | 0.262507 | 0.917761 |
| 6 | 0.072070 | 0.289936 | 0.917953 |
|
shreyashvora/ft_checkpoint-checkpoint-0-3 | shreyashvora | "2025-01-17T14:08:38Z" | 5 | 0 | null | [
"pytorch",
"distilbert",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | "2025-01-17T13:34:27Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: ft_checkpoint-checkpoint-0-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft_checkpoint-checkpoint-0-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0786
- Accuracy: 0.6551
- F1: 0.6135
- Precision: 0.6164
- Recall: 0.6551
- Accuracy Label Country: 0.0
- Accuracy Label Electronic: 0.0
- Accuracy Label Folk: 0.0
- Accuracy Label Hip-hop: 0.4752
- Accuracy Label Indie: 0.0
- Accuracy Label Jazz: 0.0
- Accuracy Label Metal: 0.3131
- Accuracy Label Pop: 0.7003
- Accuracy Label R&b: 0.0437
- Accuracy Label Rock: 0.8725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Accuracy Label Country | Accuracy Label Electronic | Accuracy Label Folk | Accuracy Label Hip-hop | Accuracy Label Indie | Accuracy Label Jazz | Accuracy Label Metal | Accuracy Label Pop | Accuracy Label R&b | Accuracy Label Rock |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:----------------------:|:-------------------------:|:-------------------:|:----------------------:|:--------------------:|:-------------------:|:--------------------:|:------------------:|:------------------:|:-------------------:|
| 0.9132 | 0.12 | 125 | 1.4600 | 0.4846 | 0.4137 | 0.4717 | 0.4846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3353 | 0.0 | 0.9259 |
| 0.855 | 0.24 | 250 | 1.3577 | 0.5521 | 0.4918 | 0.4888 | 0.5521 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5242 | 0.0 | 0.8805 |
| 0.8017 | 0.36 | 375 | 1.2138 | 0.6105 | 0.5609 | 0.5556 | 0.6105 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1804 | 0.6784 | 0.0 | 0.8205 |
| 0.8123 | 0.48 | 500 | 1.1672 | 0.6061 | 0.5597 | 0.5676 | 0.6061 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2402 | 0.6166 | 0.0437 | 0.8720 |
| 0.7585 | 0.6 | 625 | 1.1320 | 0.6375 | 0.5883 | 0.5935 | 0.6375 | 0.0 | 0.0 | 0.0 | 0.1777 | 0.0 | 0.0 | 0.1907 | 0.7586 | 0.0164 | 0.7873 |
| 0.7096 | 0.73 | 750 | 1.0917 | 0.6472 | 0.6027 | 0.6000 | 0.6472 | 0.0 | 0.0 | 0.0 | 0.1860 | 0.0 | 0.0 | 0.2944 | 0.7254 | 0.0328 | 0.8353 |
| 0.6976 | 0.85 | 875 | 1.0756 | 0.6561 | 0.6149 | 0.6101 | 0.6561 | 0.0 | 0.0 | 0.0 | 0.5579 | 0.0 | 0.0 | 0.3327 | 0.7099 | 0.0273 | 0.8574 |
| 0.7036 | 0.97 | 1000 | 1.0786 | 0.6551 | 0.6135 | 0.6164 | 0.6551 | 0.0 | 0.0 | 0.0 | 0.4752 | 0.0 | 0.0 | 0.3131 | 0.7003 | 0.0437 | 0.8725 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.13.3
|
stablediffusionapi/corcelio | stablediffusionapi | "2024-06-04T17:28:58Z" | 0 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-04T17:23:01Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Corcelio API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "corcelio".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/corcelio)
Model link: [View model](https://modelslab.com/models/corcelio)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "corcelio",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
TrungPT/q-FrozenLake-v1-4x4-noSlippery | TrungPT | "2023-12-25T17:01:19Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-25T17:01:16Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
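Note that `load_from_hub` below is not a library import; a minimal sketch of the helper (as defined in the Hugging Face Deep RL course notebooks) is:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    # Download the pickled Q-table dictionary from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```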
```python
import gym

model = load_from_hub(repo_id="TrungPT/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AtomGradient/text_classification_inner_lab | AtomGradient | "2023-06-19T07:21:43Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-19T07:07:17Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93208
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2302
- Accuracy: 0.9321
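A minimal inference sketch (label names come from the checkpoint's config):

```python
from transformers import pipeline

# Score a review with the fine-tuned IMDB sentiment classifier.
clf = pipeline("text-classification", model="AtomGradient/text_classification_inner_lab")
print(clf("This movie was absolutely wonderful."))
```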
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2258 | 1.0 | 1563 | 0.2223 | 0.9202 |
| 0.1543 | 2.0 | 3126 | 0.2302 | 0.9321 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
MaziyarPanahi/YamshadowStrangemerges_32_T3qExperiment26 | MaziyarPanahi | "2024-04-08T14:21:57Z" | 18 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/T3qExperiment26-7B",
"base_model:merge:automerger/T3qExperiment26-7B",
"base_model:automerger/YamshadowStrangemerges_32-7B",
"base_model:merge:automerger/YamshadowStrangemerges_32-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-08T14:11:01Z" | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: YamshadowStrangemerges_32_T3qExperiment26
base_model:
- automerger/YamshadowStrangemerges_32-7B
- automerger/T3qExperiment26-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# YamshadowStrangemerges_32_T3qExperiment26
YamshadowStrangemerges_32_T3qExperiment26 is a merge of the following models:
* [automerger/YamshadowStrangemerges_32-7B](https://huggingface.co/automerger/YamshadowStrangemerges_32-7B)
* [automerger/T3qExperiment26-7B](https://huggingface.co/automerger/T3qExperiment26-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/YamshadowStrangemerges_32_T3qExperiment26"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
qgallouedec/td3-Walker2DBulletEnv-v0-1035828328 | qgallouedec | "2024-04-06T13:57:16Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-27T15:17:05Z" | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
metrics:
- type: mean_reward
value: 2528.44 +/- 22.08
name: mean_reward
verified: false
---
# **TD3** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of a **TD3** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env Walker2DBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Walker2DBulletEnv-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo td3 --env Walker2DBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Walker2DBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo td3 --env Walker2DBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env Walker2DBulletEnv-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('buffer_size', 200000),
('gamma', 0.98),
('gradient_steps', -1),
('learning_rate', 0.001),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', [1, 'episode']),
('normalize', False)])
```
|
robiulawaldev/4f5e4e93-9058-4d28-88b7-c5f218c6bf3d | robiulawaldev | "2025-01-27T13:07:56Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | "2025-01-27T13:05:58Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4f5e4e93-9058-4d28-88b7-c5f218c6bf3d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9d1482d5829080f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9d1482d5829080f2_train_data.json
type:
field_instruction: prompt
field_output: chosen_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/4f5e4e93-9058-4d28-88b7-c5f218c6bf3d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/9d1482d5829080f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e343376e-c5c4-45db-8381-b53a6ae62c4a
wandb_project: Birthday-SN56-35-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e343376e-c5c4-45db-8381-b53a6ae62c4a
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4f5e4e93-9058-4d28-88b7-c5f218c6bf3d
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6943
## Model description
More information needed
## Intended uses & limitations
More information needed
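One hedged way to try the adapter (a sketch, assuming the LoRA weights in this repo apply on top of the base model named above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True, device_map="auto")

# Attach the LoRA adapter from this repository on top of the base weights.
model = PeftModel.from_pretrained(base, "robiulawaldev/4f5e4e93-9058-4d28-88b7-c5f218c6bf3d")
```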
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 0.8413 |
| 1.6412 | 0.0074 | 13 | 0.7270 |
| 1.583 | 0.0148 | 26 | 0.7058 |
| 1.4765 | 0.0222 | 39 | 0.6943 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tuantmdev/46764790-ee0e-4210-a40d-9b4d05544a5e | tuantmdev | "2025-02-06T12:25:27Z" | 18 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-06T12:17:52Z" | ---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 46764790-ee0e-4210-a40d-9b4d05544a5e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8df66764bf488e23_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8df66764bf488e23_train_data.json
type:
field_input: my_solu
field_instruction: question
field_output: solution
field_system: ''
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuantmdev/46764790-ee0e-4210-a40d-9b4d05544a5e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/8df66764bf488e23_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_strategy: best
saves_per_epoch: 5
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a6de568f-26c4-435b-bbd4-aae37178c35b
wandb_project: Gradients-On-Demand
wandb_run: unknown
wandb_runid: a6de568f-26c4-435b-bbd4-aae37178c35b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 46764790-ee0e-4210-a40d-9b4d05544a5e
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
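Note that the reported loss is `nan`, so treat the adapter with caution. If you still want to experiment with it, one hedged option is to merge the LoRA weights into the base model (the output path is a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4")
adapter = PeftModel.from_pretrained(base, "tuantmdev/46764790-ee0e-4210-a40d-9b4d05544a5e")

# Fold the LoRA deltas into the base weights and save a standalone checkpoint.
merged = adapter.merge_and_unload()
merged.save_pretrained("mistral-7b-ao-merged")  # hypothetical output directory
```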
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | nan |
| 0.0 | 0.0175 | 10 | nan |
| 0.0 | 0.0351 | 20 | nan |
| 0.0496 | 0.0526 | 30 | nan |
| 0.0 | 0.0702 | 40 | nan |
| 0.0 | 0.0877 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MayBashendy/ArabicNewSplits7_B_usingALLEssays_FineTuningAraBERT_run1_AugV5_k15_task1_organization | MayBashendy | "2025-01-18T02:10:45Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-17T22:38:05Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingALLEssays_FineTuningAraBERT_run1_AugV5_k15_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingALLEssays_FineTuningAraBERT_run1_AugV5_k15_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6748
- Qwk: 0.7015
- Mse: 0.6748
- Rmse: 0.8214
## Model description
More information needed
## Intended uses & limitations
More information needed
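The card reports Qwk/MSE/RMSE, which suggests the head scores essay organization on a numeric scale; a hedged sketch under that assumption (the example text is a placeholder):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "MayBashendy/ArabicNewSplits7_B_usingALLEssays_FineTuningAraBERT_run1_AugV5_k15_task1_organization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("نص المقال هنا", return_tensors="pt")  # hypothetical essay text
with torch.no_grad():
    logits = model(**inputs).logits  # one value per score dimension, if the head is a regressor
print(logits)
```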
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0282 | 2 | 6.8284 | 0.0239 | 6.8284 | 2.6131 |
| No log | 0.0563 | 4 | 4.5847 | 0.0772 | 4.5847 | 2.1412 |
| No log | 0.0845 | 6 | 3.2874 | -0.0117 | 3.2874 | 1.8131 |
| No log | 0.1127 | 8 | 2.5326 | 0.0580 | 2.5326 | 1.5914 |
| No log | 0.1408 | 10 | 2.1036 | 0.0847 | 2.1036 | 1.4504 |
| No log | 0.1690 | 12 | 1.7942 | 0.1273 | 1.7942 | 1.3395 |
| No log | 0.1972 | 14 | 2.2051 | 0.1471 | 2.2051 | 1.4850 |
| No log | 0.2254 | 16 | 1.8886 | 0.2564 | 1.8886 | 1.3743 |
| No log | 0.2535 | 18 | 1.9081 | 0.3279 | 1.9081 | 1.3814 |
| No log | 0.2817 | 20 | 1.6213 | 0.2281 | 1.6213 | 1.2733 |
| No log | 0.3099 | 22 | 1.6032 | 0.2881 | 1.6032 | 1.2662 |
| No log | 0.3380 | 24 | 1.4841 | 0.3590 | 1.4841 | 1.2182 |
| No log | 0.3662 | 26 | 1.3783 | 0.3009 | 1.3783 | 1.1740 |
| No log | 0.3944 | 28 | 1.3763 | 0.3833 | 1.3763 | 1.1732 |
| No log | 0.4225 | 30 | 1.3182 | 0.4733 | 1.3182 | 1.1481 |
| No log | 0.4507 | 32 | 1.2517 | 0.5 | 1.2517 | 1.1188 |
| No log | 0.4789 | 34 | 1.3285 | 0.4733 | 1.3285 | 1.1526 |
| No log | 0.5070 | 36 | 1.4489 | 0.4898 | 1.4489 | 1.2037 |
| No log | 0.5352 | 38 | 1.4877 | 0.4898 | 1.4877 | 1.2197 |
| No log | 0.5634 | 40 | 1.4065 | 0.5103 | 1.4065 | 1.1860 |
| No log | 0.5915 | 42 | 1.5065 | 0.4762 | 1.5065 | 1.2274 |
| No log | 0.6197 | 44 | 1.9121 | 0.4444 | 1.9121 | 1.3828 |
| No log | 0.6479 | 46 | 2.1736 | 0.3017 | 2.1736 | 1.4743 |
| No log | 0.6761 | 48 | 2.0598 | 0.3468 | 2.0598 | 1.4352 |
| No log | 0.7042 | 50 | 1.6580 | 0.4937 | 1.6580 | 1.2876 |
| No log | 0.7324 | 52 | 1.3973 | 0.5067 | 1.3973 | 1.1821 |
| No log | 0.7606 | 54 | 1.3256 | 0.5147 | 1.3256 | 1.1513 |
| No log | 0.7887 | 56 | 1.0689 | 0.6014 | 1.0689 | 1.0339 |
| No log | 0.8169 | 58 | 1.0468 | 0.6323 | 1.0468 | 1.0231 |
| No log | 0.8451 | 60 | 0.9586 | 0.6531 | 0.9586 | 0.9791 |
| No log | 0.8732 | 62 | 0.9158 | 0.6763 | 0.9158 | 0.9570 |
| No log | 0.9014 | 64 | 1.2193 | 0.4677 | 1.2193 | 1.1042 |
| No log | 0.9296 | 66 | 1.2355 | 0.5079 | 1.2355 | 1.1115 |
| No log | 0.9577 | 68 | 0.7880 | 0.7123 | 0.7880 | 0.8877 |
| No log | 0.9859 | 70 | 1.4157 | 0.5780 | 1.4157 | 1.1898 |
| No log | 1.0141 | 72 | 2.2375 | 0.4476 | 2.2375 | 1.4958 |
| No log | 1.0423 | 74 | 1.7493 | 0.5 | 1.7493 | 1.3226 |
| No log | 1.0704 | 76 | 0.8839 | 0.6538 | 0.8839 | 0.9401 |
| No log | 1.0986 | 78 | 0.7321 | 0.7483 | 0.7321 | 0.8556 |
| No log | 1.1268 | 80 | 0.7393 | 0.7234 | 0.7393 | 0.8598 |
| No log | 1.1549 | 82 | 0.7579 | 0.7092 | 0.7579 | 0.8706 |
| No log | 1.1831 | 84 | 0.7076 | 0.7152 | 0.7076 | 0.8412 |
| No log | 1.2113 | 86 | 0.7829 | 0.7125 | 0.7829 | 0.8848 |
| No log | 1.2394 | 88 | 0.8603 | 0.7030 | 0.8603 | 0.9275 |
| No log | 1.2676 | 90 | 0.8543 | 0.6957 | 0.8543 | 0.9243 |
| No log | 1.2958 | 92 | 0.8634 | 0.675 | 0.8634 | 0.9292 |
| No log | 1.3239 | 94 | 0.7745 | 0.7516 | 0.7745 | 0.8801 |
| No log | 1.3521 | 96 | 0.7257 | 0.7342 | 0.7257 | 0.8519 |
| No log | 1.3803 | 98 | 0.7450 | 0.7143 | 0.7450 | 0.8631 |
| No log | 1.4085 | 100 | 0.6965 | 0.7925 | 0.6965 | 0.8346 |
| No log | 1.4366 | 102 | 1.0669 | 0.6512 | 1.0669 | 1.0329 |
| No log | 1.4648 | 104 | 1.3621 | 0.5506 | 1.3621 | 1.1671 |
| No log | 1.4930 | 106 | 1.0119 | 0.6627 | 1.0119 | 1.0059 |
| No log | 1.5211 | 108 | 0.7427 | 0.7134 | 0.7427 | 0.8618 |
| No log | 1.5493 | 110 | 0.7570 | 0.7114 | 0.7570 | 0.8701 |
| No log | 1.5775 | 112 | 0.8250 | 0.7123 | 0.8250 | 0.9083 |
| No log | 1.6056 | 114 | 0.7339 | 0.7105 | 0.7339 | 0.8567 |
| No log | 1.6338 | 116 | 0.6620 | 0.7484 | 0.6620 | 0.8137 |
| No log | 1.6620 | 118 | 0.6919 | 0.7333 | 0.6919 | 0.8318 |
| No log | 1.6901 | 120 | 0.7974 | 0.6809 | 0.7974 | 0.8930 |
| No log | 1.7183 | 122 | 0.7534 | 0.7133 | 0.7534 | 0.8680 |
| No log | 1.7465 | 124 | 0.8260 | 0.6667 | 0.8260 | 0.9088 |
| No log | 1.7746 | 126 | 0.9691 | 0.6573 | 0.9691 | 0.9844 |
| No log | 1.8028 | 128 | 1.2706 | 0.6076 | 1.2706 | 1.1272 |
| No log | 1.8310 | 130 | 1.1738 | 0.6296 | 1.1738 | 1.0834 |
| No log | 1.8592 | 132 | 0.7725 | 0.7075 | 0.7725 | 0.8789 |
| No log | 1.8873 | 134 | 0.6599 | 0.7671 | 0.6599 | 0.8123 |
| No log | 1.9155 | 136 | 0.9657 | 0.6176 | 0.9657 | 0.9827 |
| No log | 1.9437 | 138 | 1.3075 | 0.5512 | 1.3075 | 1.1435 |
| No log | 1.9718 | 140 | 1.1479 | 0.5909 | 1.1479 | 1.0714 |
| No log | 2.0 | 142 | 0.7742 | 0.6667 | 0.7742 | 0.8799 |
| No log | 2.0282 | 144 | 0.8991 | 0.6667 | 0.8991 | 0.9482 |
| No log | 2.0563 | 146 | 0.8522 | 0.6667 | 0.8522 | 0.9232 |
| No log | 2.0845 | 148 | 0.7513 | 0.6897 | 0.7513 | 0.8668 |
| No log | 2.1127 | 150 | 0.6820 | 0.7724 | 0.6820 | 0.8258 |
| No log | 2.1408 | 152 | 0.6503 | 0.7808 | 0.6503 | 0.8064 |
| No log | 2.1690 | 154 | 0.6224 | 0.8205 | 0.6224 | 0.7889 |
| No log | 2.1972 | 156 | 0.6260 | 0.8121 | 0.6260 | 0.7912 |
| No log | 2.2254 | 158 | 0.6574 | 0.8024 | 0.6574 | 0.8108 |
| No log | 2.2535 | 160 | 0.6256 | 0.8193 | 0.6256 | 0.7909 |
| No log | 2.2817 | 162 | 0.6940 | 0.76 | 0.6940 | 0.8331 |
| No log | 2.3099 | 164 | 0.7523 | 0.7347 | 0.7523 | 0.8674 |
| No log | 2.3380 | 166 | 0.8291 | 0.7006 | 0.8291 | 0.9106 |
| No log | 2.3662 | 168 | 1.0191 | 0.6585 | 1.0191 | 1.0095 |
| No log | 2.3944 | 170 | 0.9327 | 0.6871 | 0.9327 | 0.9657 |
| No log | 2.4225 | 172 | 0.7042 | 0.7975 | 0.7042 | 0.8392 |
| No log | 2.4507 | 174 | 0.7423 | 0.7550 | 0.7423 | 0.8616 |
| No log | 2.4789 | 176 | 0.7977 | 0.6939 | 0.7977 | 0.8931 |
| No log | 2.5070 | 178 | 0.7738 | 0.7152 | 0.7738 | 0.8797 |
| No log | 2.5352 | 180 | 0.6924 | 0.7821 | 0.6924 | 0.8321 |
| No log | 2.5634 | 182 | 0.6789 | 0.8263 | 0.6789 | 0.8240 |
| No log | 2.5915 | 184 | 0.6958 | 0.8121 | 0.6958 | 0.8342 |
| No log | 2.6197 | 186 | 0.6921 | 0.7733 | 0.6921 | 0.8319 |
| No log | 2.6479 | 188 | 0.7037 | 0.7483 | 0.7037 | 0.8389 |
| No log | 2.6761 | 190 | 0.6854 | 0.7919 | 0.6854 | 0.8279 |
| No log | 2.7042 | 192 | 0.6580 | 0.8129 | 0.6580 | 0.8112 |
| No log | 2.7324 | 194 | 0.6326 | 0.8280 | 0.6326 | 0.7954 |
| No log | 2.7606 | 196 | 0.6293 | 0.8375 | 0.6293 | 0.7933 |
| No log | 2.7887 | 198 | 0.6149 | 0.8101 | 0.6149 | 0.7841 |
| No log | 2.8169 | 200 | 0.7139 | 0.7361 | 0.7139 | 0.8449 |
| No log | 2.8451 | 202 | 0.7376 | 0.7133 | 0.7376 | 0.8588 |
| No log | 2.8732 | 204 | 0.6931 | 0.7606 | 0.6931 | 0.8325 |
| No log | 2.9014 | 206 | 0.7521 | 0.7536 | 0.7521 | 0.8672 |
| No log | 2.9296 | 208 | 0.8776 | 0.5942 | 0.8776 | 0.9368 |
| No log | 2.9577 | 210 | 0.8764 | 0.6176 | 0.8764 | 0.9362 |
| No log | 2.9859 | 212 | 0.7681 | 0.7445 | 0.7681 | 0.8764 |
| No log | 3.0141 | 214 | 0.8025 | 0.7286 | 0.8025 | 0.8958 |
| No log | 3.0423 | 216 | 0.9277 | 0.6429 | 0.9277 | 0.9632 |
| No log | 3.0704 | 218 | 0.8339 | 0.6803 | 0.8339 | 0.9132 |
| No log | 3.0986 | 220 | 0.7127 | 0.7625 | 0.7127 | 0.8442 |
| No log | 3.1268 | 222 | 0.6401 | 0.7950 | 0.6401 | 0.8000 |
| No log | 3.1549 | 224 | 0.6247 | 0.8256 | 0.6247 | 0.7903 |
| No log | 3.1831 | 226 | 0.6302 | 0.8187 | 0.6302 | 0.7939 |
| No log | 3.2113 | 228 | 0.6760 | 0.7662 | 0.6760 | 0.8222 |
| No log | 3.2394 | 230 | 0.7648 | 0.7162 | 0.7648 | 0.8745 |
| No log | 3.2676 | 232 | 0.7890 | 0.7324 | 0.7890 | 0.8882 |
| No log | 3.2958 | 234 | 0.8307 | 0.7092 | 0.8307 | 0.9114 |
| No log | 3.3239 | 236 | 0.7605 | 0.7417 | 0.7605 | 0.8721 |
| No log | 3.3521 | 238 | 0.7184 | 0.7654 | 0.7184 | 0.8476 |
| No log | 3.3803 | 240 | 0.7077 | 0.7595 | 0.7077 | 0.8412 |
| No log | 3.4085 | 242 | 0.6715 | 0.7368 | 0.6715 | 0.8194 |
| No log | 3.4366 | 244 | 0.7692 | 0.7133 | 0.7692 | 0.8770 |
| No log | 3.4648 | 246 | 0.7637 | 0.7034 | 0.7637 | 0.8739 |
| No log | 3.4930 | 248 | 0.7098 | 0.7383 | 0.7098 | 0.8425 |
| No log | 3.5211 | 250 | 0.6425 | 0.7662 | 0.6425 | 0.8016 |
| No log | 3.5493 | 252 | 0.6062 | 0.7898 | 0.6062 | 0.7786 |
| No log | 3.5775 | 254 | 0.6108 | 0.7975 | 0.6108 | 0.7815 |
| No log | 3.6056 | 256 | 0.6164 | 0.7815 | 0.6164 | 0.7851 |
| No log | 3.6338 | 258 | 0.6228 | 0.7867 | 0.6228 | 0.7892 |
| No log | 3.6620 | 260 | 0.6264 | 0.7550 | 0.6264 | 0.7914 |
| No log | 3.6901 | 262 | 0.7816 | 0.7020 | 0.7816 | 0.8841 |
| No log | 3.7183 | 264 | 0.7823 | 0.7020 | 0.7823 | 0.8845 |
| No log | 3.7465 | 266 | 0.7141 | 0.7582 | 0.7141 | 0.8450 |
| No log | 3.7746 | 268 | 0.6448 | 0.8 | 0.6448 | 0.8030 |
| No log | 3.8028 | 270 | 0.5718 | 0.8182 | 0.5718 | 0.7561 |
| No log | 3.8310 | 272 | 0.6071 | 0.8182 | 0.6071 | 0.7792 |
| No log | 3.8592 | 274 | 0.6146 | 0.8302 | 0.6146 | 0.7840 |
| No log | 3.8873 | 276 | 0.5999 | 0.8395 | 0.5999 | 0.7745 |
| No log | 3.9155 | 278 | 0.6058 | 0.8395 | 0.6058 | 0.7784 |
| No log | 3.9437 | 280 | 0.6330 | 0.8079 | 0.6330 | 0.7956 |
| No log | 3.9718 | 282 | 0.6399 | 0.8383 | 0.6399 | 0.7999 |
| No log | 4.0 | 284 | 0.6765 | 0.8313 | 0.6765 | 0.8225 |
| No log | 4.0282 | 286 | 0.6807 | 0.8284 | 0.6807 | 0.8250 |
| No log | 4.0563 | 288 | 0.6664 | 0.8105 | 0.6664 | 0.8163 |
| No log | 4.0845 | 290 | 0.6765 | 0.7973 | 0.6765 | 0.8225 |
| No log | 4.1127 | 292 | 0.7003 | 0.7724 | 0.7003 | 0.8368 |
| No log | 4.1408 | 294 | 0.6959 | 0.7778 | 0.6959 | 0.8342 |
| No log | 4.1690 | 296 | 0.6735 | 0.7724 | 0.6735 | 0.8207 |
| No log | 4.1972 | 298 | 0.6604 | 0.7891 | 0.6604 | 0.8127 |
| No log | 4.2254 | 300 | 0.6599 | 0.7568 | 0.6599 | 0.8123 |
| No log | 4.2535 | 302 | 0.6566 | 0.7724 | 0.6566 | 0.8103 |
| No log | 4.2817 | 304 | 0.6749 | 0.7376 | 0.6749 | 0.8215 |
| No log | 4.3099 | 306 | 0.6087 | 0.7692 | 0.6087 | 0.7802 |
| No log | 4.3380 | 308 | 0.5563 | 0.7733 | 0.5563 | 0.7458 |
| No log | 4.3662 | 310 | 0.5621 | 0.7733 | 0.5621 | 0.7497 |
| No log | 4.3944 | 312 | 0.6031 | 0.7838 | 0.6031 | 0.7766 |
| No log | 4.4225 | 314 | 0.6683 | 0.7660 | 0.6683 | 0.8175 |
| No log | 4.4507 | 316 | 0.7202 | 0.7222 | 0.7202 | 0.8486 |
| No log | 4.4789 | 318 | 0.7567 | 0.6993 | 0.7567 | 0.8699 |
| No log | 4.5070 | 320 | 0.6714 | 0.7397 | 0.6714 | 0.8194 |
| No log | 4.5352 | 322 | 0.6167 | 0.7483 | 0.6167 | 0.7853 |
| No log | 4.5634 | 324 | 0.6021 | 0.7413 | 0.6021 | 0.7760 |
| No log | 4.5915 | 326 | 0.6302 | 0.7465 | 0.6302 | 0.7939 |
| No log | 4.6197 | 328 | 0.6671 | 0.7338 | 0.6671 | 0.8168 |
| No log | 4.6479 | 330 | 0.7207 | 0.7338 | 0.7207 | 0.8490 |
| No log | 4.6761 | 332 | 0.7850 | 0.6917 | 0.7850 | 0.8860 |
| No log | 4.7042 | 334 | 0.8459 | 0.6718 | 0.8459 | 0.9197 |
| No log | 4.7324 | 336 | 0.8375 | 0.6917 | 0.8375 | 0.9152 |
| No log | 4.7606 | 338 | 0.8267 | 0.7218 | 0.8267 | 0.9092 |
| No log | 4.7887 | 340 | 0.7516 | 0.7007 | 0.7516 | 0.8669 |
| No log | 4.8169 | 342 | 0.7378 | 0.7246 | 0.7378 | 0.8589 |
| No log | 4.8451 | 344 | 0.7503 | 0.7101 | 0.7503 | 0.8662 |
| No log | 4.8732 | 346 | 0.6754 | 0.7660 | 0.6754 | 0.8218 |
| No log | 4.9014 | 348 | 0.6048 | 0.7917 | 0.6048 | 0.7777 |
| No log | 4.9296 | 350 | 0.5702 | 0.8056 | 0.5702 | 0.7551 |
| No log | 4.9577 | 352 | 0.5725 | 0.7862 | 0.5725 | 0.7566 |
| No log | 4.9859 | 354 | 0.5834 | 0.7639 | 0.5834 | 0.7638 |
| No log | 5.0141 | 356 | 0.6025 | 0.7552 | 0.6025 | 0.7762 |
| No log | 5.0423 | 358 | 0.6113 | 0.7857 | 0.6113 | 0.7818 |
| No log | 5.0704 | 360 | 0.6178 | 0.7015 | 0.6178 | 0.7860 |
| No log | 5.0986 | 362 | 0.5811 | 0.7259 | 0.5811 | 0.7623 |
| No log | 5.1268 | 364 | 0.5376 | 0.7832 | 0.5376 | 0.7332 |
| No log | 5.1549 | 366 | 0.5662 | 0.8027 | 0.5662 | 0.7525 |
| No log | 5.1831 | 368 | 0.5988 | 0.7862 | 0.5988 | 0.7738 |
| No log | 5.2113 | 370 | 0.5996 | 0.7945 | 0.5996 | 0.7743 |
| No log | 5.2394 | 372 | 0.6177 | 0.7692 | 0.6177 | 0.7859 |
| No log | 5.2676 | 374 | 0.7281 | 0.6619 | 0.7281 | 0.8533 |
| No log | 5.2958 | 376 | 0.7164 | 0.7092 | 0.7164 | 0.8464 |
| No log | 5.3239 | 378 | 0.6155 | 0.8026 | 0.6155 | 0.7845 |
| No log | 5.3521 | 380 | 0.5868 | 0.8158 | 0.5868 | 0.7660 |
| No log | 5.3803 | 382 | 0.5842 | 0.8158 | 0.5842 | 0.7644 |
| No log | 5.4085 | 384 | 0.6170 | 0.7778 | 0.6170 | 0.7855 |
| No log | 5.4366 | 386 | 0.6378 | 0.7606 | 0.6378 | 0.7987 |
| No log | 5.4648 | 388 | 0.6164 | 0.7606 | 0.6164 | 0.7851 |
| No log | 5.4930 | 390 | 0.5612 | 0.7945 | 0.5612 | 0.7491 |
| No log | 5.5211 | 392 | 0.5138 | 0.8212 | 0.5138 | 0.7168 |
| No log | 5.5493 | 394 | 0.4860 | 0.8289 | 0.4860 | 0.6972 |
| No log | 5.5775 | 396 | 0.5209 | 0.8133 | 0.5209 | 0.7218 |
| No log | 5.6056 | 398 | 0.5387 | 0.8133 | 0.5387 | 0.7340 |
| No log | 5.6338 | 400 | 0.5625 | 0.8267 | 0.5625 | 0.7500 |
| No log | 5.6620 | 402 | 0.5856 | 0.8 | 0.5856 | 0.7653 |
| No log | 5.6901 | 404 | 0.6196 | 0.8079 | 0.6196 | 0.7871 |
| No log | 5.7183 | 406 | 0.5894 | 0.8133 | 0.5894 | 0.7678 |
| No log | 5.7465 | 408 | 0.5867 | 0.8079 | 0.5867 | 0.7660 |
| No log | 5.7746 | 410 | 0.5585 | 0.8027 | 0.5585 | 0.7474 |
| No log | 5.8028 | 412 | 0.5372 | 0.8 | 0.5372 | 0.7329 |
| No log | 5.8310 | 414 | 0.5506 | 0.8289 | 0.5506 | 0.7420 |
| No log | 5.8592 | 416 | 0.5670 | 0.8289 | 0.5670 | 0.7530 |
| No log | 5.8873 | 418 | 0.5902 | 0.8212 | 0.5902 | 0.7682 |
| No log | 5.9155 | 420 | 0.5727 | 0.8289 | 0.5727 | 0.7568 |
| No log | 5.9437 | 422 | 0.5496 | 0.8462 | 0.5496 | 0.7413 |
| No log | 5.9718 | 424 | 0.5365 | 0.8105 | 0.5365 | 0.7324 |
| No log | 6.0 | 426 | 0.5828 | 0.7550 | 0.5828 | 0.7634 |
| No log | 6.0282 | 428 | 0.5702 | 0.7712 | 0.5702 | 0.7551 |
| No log | 6.0563 | 430 | 0.6117 | 0.7632 | 0.6117 | 0.7821 |
| No log | 6.0845 | 432 | 0.6592 | 0.7785 | 0.6592 | 0.8119 |
| No log | 6.1127 | 434 | 0.6782 | 0.7671 | 0.6782 | 0.8236 |
| No log | 6.1408 | 436 | 0.7048 | 0.7586 | 0.7048 | 0.8395 |
| No log | 6.1690 | 438 | 0.7005 | 0.7671 | 0.7005 | 0.8370 |
| No log | 6.1972 | 440 | 0.6870 | 0.7660 | 0.6870 | 0.8288 |
| No log | 6.2254 | 442 | 0.6029 | 0.7746 | 0.6029 | 0.7765 |
| No log | 6.2535 | 444 | 0.5503 | 0.8 | 0.5503 | 0.7418 |
| No log | 6.2817 | 446 | 0.5206 | 0.8079 | 0.5206 | 0.7215 |
| No log | 6.3099 | 448 | 0.5255 | 0.8026 | 0.5255 | 0.7249 |
| No log | 6.3380 | 450 | 0.5358 | 0.7671 | 0.5358 | 0.7320 |
| No log | 6.3662 | 452 | 0.5886 | 0.7465 | 0.5886 | 0.7672 |
| No log | 6.3944 | 454 | 0.5702 | 0.7639 | 0.5702 | 0.7551 |
| No log | 6.4225 | 456 | 0.5574 | 0.7838 | 0.5574 | 0.7466 |
| No log | 6.4507 | 458 | 0.5652 | 0.8158 | 0.5652 | 0.7518 |
| No log | 6.4789 | 460 | 0.6189 | 0.8176 | 0.6189 | 0.7867 |
| No log | 6.5070 | 462 | 0.6266 | 0.7950 | 0.6266 | 0.7916 |
| No log | 6.5352 | 464 | 0.5642 | 0.8228 | 0.5642 | 0.7511 |
| No log | 6.5634 | 466 | 0.5828 | 0.7815 | 0.5828 | 0.7634 |
| No log | 6.5915 | 468 | 0.5950 | 0.7815 | 0.5950 | 0.7714 |
| No log | 6.6197 | 470 | 0.5732 | 0.8105 | 0.5732 | 0.7571 |
| No log | 6.6479 | 472 | 0.5958 | 0.8182 | 0.5958 | 0.7719 |
| No log | 6.6761 | 474 | 0.6766 | 0.7712 | 0.6766 | 0.8226 |
| No log | 6.7042 | 476 | 0.7478 | 0.7248 | 0.7478 | 0.8648 |
| No log | 6.7324 | 478 | 0.7133 | 0.7222 | 0.7133 | 0.8446 |
| No log | 6.7606 | 480 | 0.6301 | 0.7671 | 0.6301 | 0.7938 |
| No log | 6.7887 | 482 | 0.6069 | 0.7815 | 0.6069 | 0.7790 |
| No log | 6.8169 | 484 | 0.6234 | 0.7785 | 0.6234 | 0.7895 |
| No log | 6.8451 | 486 | 0.5999 | 0.8079 | 0.5999 | 0.7745 |
| No log | 6.8732 | 488 | 0.6161 | 0.8079 | 0.6161 | 0.7849 |
| No log | 6.9014 | 490 | 0.6754 | 0.7432 | 0.6754 | 0.8218 |
| No log | 6.9296 | 492 | 0.6818 | 0.7286 | 0.6818 | 0.8257 |
| No log | 6.9577 | 494 | 0.6599 | 0.7778 | 0.6599 | 0.8123 |
| No log | 6.9859 | 496 | 0.6683 | 0.7518 | 0.6683 | 0.8175 |
| No log | 7.0141 | 498 | 0.6948 | 0.7429 | 0.6948 | 0.8336 |
| 0.4116 | 7.0423 | 500 | 0.6707 | 0.7324 | 0.6707 | 0.8189 |
| 0.4116 | 7.0704 | 502 | 0.6277 | 0.7639 | 0.6277 | 0.7923 |
| 0.4116 | 7.0986 | 504 | 0.6390 | 0.7771 | 0.6390 | 0.7994 |
| 0.4116 | 7.1268 | 506 | 0.7377 | 0.7683 | 0.7377 | 0.8589 |
| 0.4116 | 7.1549 | 508 | 0.6959 | 0.7683 | 0.6959 | 0.8342 |
| 0.4116 | 7.1831 | 510 | 0.6216 | 0.8079 | 0.6216 | 0.7884 |
| 0.4116 | 7.2113 | 512 | 0.6421 | 0.7391 | 0.6421 | 0.8013 |
| 0.4116 | 7.2394 | 514 | 0.6670 | 0.7313 | 0.6670 | 0.8167 |
| 0.4116 | 7.2676 | 516 | 0.6803 | 0.7313 | 0.6803 | 0.8248 |
| 0.4116 | 7.2958 | 518 | 0.6705 | 0.7376 | 0.6705 | 0.8188 |
| 0.4116 | 7.3239 | 520 | 0.6907 | 0.7397 | 0.6907 | 0.8311 |
| 0.4116 | 7.3521 | 522 | 0.7028 | 0.7347 | 0.7028 | 0.8384 |
| 0.4116 | 7.3803 | 524 | 0.6748 | 0.7172 | 0.6748 | 0.8214 |
| 0.4116 | 7.4085 | 526 | 0.6807 | 0.7413 | 0.6807 | 0.8251 |
| 0.4116 | 7.4366 | 528 | 0.6849 | 0.7376 | 0.6849 | 0.8276 |
| 0.4116 | 7.4648 | 530 | 0.6538 | 0.7246 | 0.6538 | 0.8086 |
| 0.4116 | 7.4930 | 532 | 0.6729 | 0.7465 | 0.6729 | 0.8203 |
| 0.4116 | 7.5211 | 534 | 0.7993 | 0.6809 | 0.7993 | 0.8940 |
| 0.4116 | 7.5493 | 536 | 0.7707 | 0.7050 | 0.7707 | 0.8779 |
| 0.4116 | 7.5775 | 538 | 0.6561 | 0.7338 | 0.6561 | 0.8100 |
| 0.4116 | 7.6056 | 540 | 0.5955 | 0.7391 | 0.5955 | 0.7717 |
| 0.4116 | 7.6338 | 542 | 0.6025 | 0.7391 | 0.6025 | 0.7762 |
| 0.4116 | 7.6620 | 544 | 0.5962 | 0.75 | 0.5962 | 0.7722 |
| 0.4116 | 7.6901 | 546 | 0.6210 | 0.7413 | 0.6210 | 0.7880 |
| 0.4116 | 7.7183 | 548 | 0.6392 | 0.7448 | 0.6392 | 0.7995 |
| 0.4116 | 7.7465 | 550 | 0.6935 | 0.7391 | 0.6935 | 0.8328 |
| 0.4116 | 7.7746 | 552 | 0.7884 | 0.7077 | 0.7884 | 0.8879 |
| 0.4116 | 7.8028 | 554 | 0.7965 | 0.7023 | 0.7965 | 0.8925 |
| 0.4116 | 7.8310 | 556 | 0.7496 | 0.6818 | 0.7496 | 0.8658 |
| 0.4116 | 7.8592 | 558 | 0.6939 | 0.6767 | 0.6939 | 0.8330 |
| 0.4116 | 7.8873 | 560 | 0.6748 | 0.7015 | 0.6748 | 0.8214 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
gaudi/opus-mt-gl-en-ctranslate2 | gaudi | "2024-10-18T22:09:43Z" | 10 | 0 | transformers | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | "2024-07-17T00:10:21Z" | ---
tags:
- ctranslate2
- translation
license: apache-2.0
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-gl-en)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-gl-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-gl-en --output_dir ./ctranslate2/opus-mt-gl-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-gl-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-gl-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-gl-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-gl-en) by Helsinki-NLP.
|
huggingtweets/timcast | huggingtweets | "2021-07-23T17:03:22Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/timcast/1627059798876/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1290434690487218176/DNmKXZQ6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tim Pool</div>
<div style="text-align: center; font-size: 14px;">@timcast</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tim Pool.
| Data | Tim Pool |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 204 |
| Short tweets | 324 |
| Tweets kept | 2719 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3m867fab/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @timcast's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/efdcgdgn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/efdcgdgn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/timcast')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
user-agent/BERT-taxonomy-text | user-agent | "2024-05-31T13:39:11Z" | 160 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"pytorch",
"multimodal",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-30T19:34:27Z" | ---
tags:
- text-classification
- pytorch
- multimodal
metrics:
- f1_score
model-index:
- name: BERT-taxonomy-text
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Weighted F1 Score
type: f1_score
value: 0.88
pipeline_tag: text-classification
---
# BERT-taxonomy-text
A BERT-based text-classification model that assigns taxonomy labels to input text, reaching a weighted F1 score of 0.88 on its evaluation set.
## Usage
The model can be queried like any Hugging Face text-classification checkpoint.
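A hedged sketch (the example input is an assumption, since the card does not document the taxonomy labels):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="user-agent/BERT-taxonomy-text")

# Hypothetical product text; the returned label comes from the model's taxonomy.
print(classifier("Wireless noise-cancelling headphones"))
```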
andreac94/finetuning-sentiment-model-amazonbaby5000 | andreac94 | "2023-06-19T01:35:03Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-19T01:03:38Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-amazonbaby5000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-amazonbaby5000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2733
- Accuracy: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
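A hedged inference sketch (the label order is an assumption; DistilBERT fine-tunes typically map index 1 to the positive class):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "andreac94/finetuning-sentiment-model-amazonbaby5000"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("My baby loves this toy!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # index 1 is assumed to be the positive class
```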
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF | mradermacher | "2025-03-13T17:54:13Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"de",
"base_model:pL-Community/GermanEduScorer-Qwen2-1.5b",
"base_model:quantized:pL-Community/GermanEduScorer-Qwen2-1.5b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-13T17:48:22Z" | ---
base_model: pL-Community/GermanEduScorer-Qwen2-1.5b
language:
- de
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/pL-Community/GermanEduScorer-Qwen2-1.5b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
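As one hedged example, a downloaded quant can be run with llama-cpp-python (just one of several GGUF-compatible runtimes; the file name matches the recommended Q4_K_M entry below and the prompt is illustrative):
```python
from llama_cpp import Llama

# Path to a quant downloaded from this repo, e.g. the recommended Q4_K_M file.
llm = Llama(model_path="GermanEduScorer-Qwen2-1.5b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Bewerte den Bildungswert des folgenden Textes:", max_tokens=64)
print(out["choices"][0]["text"])
```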
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GermanEduScorer-Qwen2-1.5b-GGUF/resolve/main/GermanEduScorer-Qwen2-1.5b.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ClarenceDan/ec7449d2-171b-4638-8a72-45ac04cfd069 | ClarenceDan | "2025-03-09T12:16:26Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"license:apache-2.0",
"region:us"
] | null | "2025-03-09T12:04:25Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ec7449d2-171b-4638-8a72-45ac04cfd069
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 96c7cb877af8f653_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/96c7cb877af8f653_train_data.json
type:
field_input: plan
field_instruction: goal
field_output: revision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/ec7449d2-171b-4638-8a72-45ac04cfd069
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/96c7cb877af8f653_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ef008972-2079-4b14-830a-53e13b141355
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ef008972-2079-4b14-830a-53e13b141355
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ec7449d2-171b-4638-8a72-45ac04cfd069
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 3 | nan |
| 0.0 | 0.0008 | 6 | nan |
| 0.0 | 0.0013 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gespitia1/q-FrozenLake-v1-4x4-noSlippery | gespitia1 | "2024-04-04T17:20:12Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-04T17:20:09Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # FrozenLake-v1 ships with gym / gymnasium

# `load_from_hub` is the Deep RL course helper that downloads and unpickles the Q-table
model = load_from_hub(repo_id="gespitia1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DoppelReflEx/MN-12B-Kakigori | DoppelReflEx | "2025-02-26T06:36:30Z" | 74 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:cgato/Nemo-12b-Humanize-KTO-Experimental-Latest",
"base_model:merge:cgato/Nemo-12b-Humanize-KTO-Experimental-Latest",
"base_model:crestf411/MN-Slush",
"base_model:merge:crestf411/MN-Slush",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-29T03:54:10Z" | ---
license: cc-by-nc-4.0
base_model:
- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- crestf411/MN-Slush
library_name: transformers
tags:
- mergekit
- merge
---
# What is this?
A simple merge. I can say it's good enough for RP and ERP, though just decent.
Eval scores are better than [WolfFrame](https://huggingface.co/DoppelReflEx/MN-12B-WolFrame), but I can't tell exactly how good it is.
Overall, a very nice model to try. 😁
GGUF here, https://huggingface.co/mradermacher/MN-12B-Kakigori-GGUF
Imatrix here, https://huggingface.co/mradermacher/MN-12B-Kakigori-i1-GGUF
My own Q6_K: https://huggingface.co/DoppelReflEx/MN-12B-Kakigori-Q6_K-GGUF
<details>
<summary>Merge Detail</summary>
<p>
### Models Merged
The following models were included in the merge:
* [cgato/Nemo-12b-Humanize-KTO-Experimental-Latest](https://huggingface.co/cgato/Nemo-12b-Humanize-KTO-Experimental-Latest)
* [crestf411/MN-Slush](https://huggingface.co/crestf411/MN-Slush)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- model: crestf411/MN-Slush
merge_method: slerp
base_model: crestf411/MN-Slush
parameters:
t: [0, 0.1, 0.2, 0.25, 0.25, 0.2, 0.1, 0]
dtype: bfloat16
tokenizer_source: base
```
</p>
</details>
|
imsumit18/Zephyr-2-7b-insurance-data-chatbot | imsumit18 | "2024-05-15T11:45:54Z" | 7 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | "2024-05-14T09:23:42Z" | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: Zephyr-2-7b-insurance-data-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Zephyr-2-7b-insurance-data-chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.2 |
haeun161/lora-midm-7b-nsmc | haeun161 | "2023-12-12T06:07:13Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:jangmin/midm-7b-safetensors-only",
"base_model:adapter:jangmin/midm-7b-safetensors-only",
"region:us"
] | null | "2023-12-11T04:08:27Z" | ---
library_name: peft
base_model: jangmin/midm-7b-safetensors-only
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
- Fine-tuned the Midm model to handle the Korean movie review dataset (NSMC)
- Trained a model that judges whether a Korean movie review is positive or negative
## Model Details
Model: KT-AI/midm-bitext-S-7B-inst-v1
Training data: NSMC (Naver movie review dataset)
Batch size: 1
Sequence length: 384
Learning rate: 1e-4
Epochs: 1
## Additional Efforts to Improve Accuracy
- Extended training from 300 steps up to 1,000 steps
- Started with 2,000 training samples and scaled up to 3,000
## Evaluation
- (Training data) top 3,000 NSMC samples
- (Validation data) top 1,000 NSMC samples
- Training results:
TrainOutput(
global_step=1000,
training_loss=0.9650133666992188,
metrics={'train_runtime': 2982.9519,
'train_samples_per_second': 0.67,
'train_steps_per_second': 0.335,
'total_flos': 3.1051694997504e+16,
'train_loss': 0.9650133666992188,
'epoch': 0.67}
)
- Accuracy test (confusion matrix; rows = predicted, columns = actual):

|    | TP  | TN  |
|:--:|:---:|:---:|
| PP | 477 | 79  |
| PN | 31  | 413 |

Accuracy: 0.89
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
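For reference, that list corresponds to roughly the following `BitsAndBytesConfig` (a sketch; the exact training call site is not shown in this card):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```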
### Framework versions
- PEFT 0.7.0 |
Xenova/e5-large | Xenova | "2024-10-08T13:39:22Z" | 19 | 0 | transformers.js | [
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"base_model:intfloat/e5-large",
"base_model:quantized:intfloat/e5-large",
"region:us"
] | feature-extraction | "2023-06-26T13:45:04Z" | ---
base_model: intfloat/e5-large
library_name: transformers.js
---
https://huggingface.co/intfloat/e5-large with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
satriadega/alpha_model | satriadega | "2024-11-25T20:44:08Z" | 61 | 0 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | "2024-11-25T08:33:47Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DivyaMathi/ppo-SoccerTwos | DivyaMathi | "2024-03-18T13:32:28Z" | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2024-03-18T13:32:24Z" | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DivyaMathi/ppo-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Unmand/en_procare_referrer_organisation | Unmand | "2023-08-25T04:16:32Z" | 0 | 0 | spacy | [
"spacy",
"text-classification",
"en",
"region:us"
] | text-classification | "2023-08-25T04:03:16Z" | ---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_procare_referrer_organisation
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_procare_referrer_organisation` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.4,<3.6.0` |
| **Default Pipeline** | `textcat_multilabel` |
| **Components** | `textcat_multilabel` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (726 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat_multilabel`** | `H D PROJECTS PTY LTD`, `McCabe Curwood`, `Dept of Education, Skills & Employment`, `StateCover Mutual Limited`, `Perth Orthopaedic & Sports Medicine`, `Queensland Child Care Service Pty Ltd Ttee`, `Allianz Australia Insurance Limited c/- Jensen McConaghy Lawyers`, `Catholic Care Diocese of Broken Bay`, `Helping Hand New Aged Care`, `Suncorp Life`, `Qantas Airways Limited`, `Department of Defence`, `Master Builders Association of SA`, `HWL Ebsworth Lawyers`, `Alexander Watson`, `Zoetis`, `RSL Care`, `P&N Bank`, `University of NSW`, `Uber Technologies, Inc.`, `Finlay Plumbing Services Pty Ltd`, `Hays Specialist Recruitment`, `KENNARDS HIRE PTY LIMITED`, `Carer Solutions Australia`, `Unitingcare`, `No. 1 Riverside Quay Proprietary Limited`, `Gallagher Basset`, `Department of the Chief MInister and Cabinet`, `CHEP Australia`, `Minda Incorporated`, `The Star`, `Tas Water`, `Feros Care`, `Roshana Group`, `Atradius Crédito y Caución S.A de Seguros y Reaseguros`, `Services Australia`, `RT Consulting`, `The Australian Electoral Commission`, `Federal Court of Australia`, `NRMA INSURANCE`, `Catholic Education Office`, `Svitzer Australia Pty Ltd`, `QBE acting as the agent of NSW Self Insurance Corporation`, `LAWRENCE & HANSON`, `UnitingCare Queensland`, `LibertyGFG`, `Australian Tax Office`, `Alvaro Transport Pty Ltd`, `GIO Workers Compensation ACT`, `Cso Diocese Of Broken Bay`, `Glencore`, `EASTERN HOSPITAL`, `BOC Limited, a member of the Linde Group`, `INVOCARE AUSTRALIA PTY LIMITED`, `UNITRANS ASIA PACIFIC PTY LTD`, `Services Australia (Dept of Human Services)`, `VEOLIA ENVIRONMENTAL SERVICES (AUSTRALIA) PTY LTD `, `Vickilynn Pty Ltd`, `Coles Team Cover`, `MLC Life Insurance`, `Sparke Helmore Lawyers`, `RSL Lifecare Limited`, `QBE Workers Compensation TAS`, `Kimberley Clark Australia`, `The Personnel Group Ltd`, `Insurance Australia Group`, `Canberra Sand & Gravel`, `Viva Energy Australia Pty Ltd`, `Moran Aged Care Engadine`, `Australian Taxation Office`, `Youis Group Pty Ltd`, `Cleanaway`, `Mosaic Brands (Rockmans)`, `Children Hospital Foundation`, `Civil Aviation Safety Authority`, `QBE Workers Compensation WA`, `United Protestant Association`, `PSC Capital Insurance Brokers`, `Woolworths Group Limited`, `Kilcoy Global Foods`, `American Express Australia Limited`, `Palios Meegan Nicholson`, `Uniting`, `Coles Group Supply Chain Pty Ltd`, `QBE`, `OBE Organic`, `Cyprium Metals Limited`, `Kincare Health Services Pty Ltd`, `StateCover Mutual Ltd`, `FIRE RESCUE VICTORIA`, `N2N Claims Solutions`, `WesFarmers – Group TeamCover`, `NDIS Quality and Safeguards Commission`, `HD Projects Pty Ltd`, `St Finn Barr's Catholic Primary School - Lanceston`, `Power and Water Corporation`, `EML VIC Pty Ltd`, `Wanton Kearney`, `Kmart Australia Ltd`, `Territory Families – Housing & Communities`, `Calvary Community Care`, `Sedgwick`, `Leonora Contracting P/L`, `NSW Health Pathology`, `Kilcoy Pastoral Company Ltd`, `GIO CTP ACT`, `DXC Claims Management Services - VIC`, `Schindler Lifts Australia Pty Ltd`, `Meridian Lawyers`, `GIO Workers Compensation WA`, `AUB Group Limited`, `Coateshire`, `Aurizon`, `JWLand`, `Trusted Support Coordination`, `Gosford Quarries Pty Ltd`, `GIO NSW Workers Compensation`, `DESE`, `Busways Group`, `Gallagher Bassett Workers Compensation NSW`, `Allianz Australia Insurance Limited C/- McInnes Wilson Lawyers`, `oOh!Media`, `West Gate Tunnel Project`, `KOMATSU MARKETING SUPPORT AUST`, `Mills Oakley Lawyers`, `Hall & Wilcox`, `Skybridge Group Pty Limited`, `Retirement 
Living Business & Financial Services`, `Allianz Workers Compensation NT`, `Environmental Industries Pty Ltd`, `EML Workers Insurance NSW`, `Department of Agriculture, Water and the Environment`, `MS Australia`, `CSIRO`, `Orange Health Service`, `AHI Insurance`, `Bupa`, `Allianz Australia Workers Compensation (Victoria) Ltd`, `Cappello Civil Contracting Services Pty Ltd`, `LAF Group`, `RTozerconsulting`, `St Michaels College`, `Gallagher Bassett for Opal Healthcare`, `Department of Families, Fairness and Housing`, `WESTHAVEN LIMITED`, `Integrity Care`, `GPC Asia Pacific`, `Department of Primary Industries`, `Mosaic Brands Limited`, `QBE Workers Compensation NT`, `Coredev`, `South Western Sydney Local Health District`, `CGU Workers Compensation ACT`, `Tas Prison Service`, `Sonic Healthcare`, `Workcover C/BT Lawyers`, `PSC WCS`, `CPB Contractors Pty Ltd`, `Cookie Steelfixing and Construction`, `Warner Bros`, `CGU Workers Compensation NT`, `CMET`, `AnglicareSA`, `St Vincent’s Care Services Carseldine`, `Tasmanian Catholic Education Office`, `Allianz Australia Insurance Ltd`, `Roussos Legal Advisory`, `BGIS Technical Services`, `AAMI NSW CTP`, `Wotton Kearney`, `Galllgher Bassett Workers Compensation VIC`, `Brisbane Fire Pty Ltd`, `QBE Workers Compensation NSW`, `Sunshine Coast Hospital and Health Service`, `Oaks Hotels & Resorts Limited - 9004`, `Ausgrid`, `Boral Limited`, `Aerison Pty Ltd`, `Cooper Grace Ward Lawyers`, `Hsswa Pty Ltd`, `Weir Minerals Australia Ltd`, `Labour Force Pty Ltd`, `Barry Nilsson Lawyers`, `Liberty Oil Australia Pty Ltd`, `ABPhillips`, `Austral Risk`, `AAI Limited trading as GIO - Agent for the Workers Compensation Nominal Insurer`, `OCEAN GARDENS INC`, `Roshana Group Pty Ltd`, `GIO CTP NSW`, `Lachlan Shire Council`, `Allianz Workers Compensation WA`, `United Equipment Pty Ltd`, `PFD FOOD SERVICES PTY LTD`, `Phoenix Insurance Brokers`, `Blumers`, `Department of Home Affairs`, `Anglo Coal (Grosvenor Management) Pty Ltd c/- Ashurst Australia`, `Anglicare Southern QLD`, `Lifetime Support`, `The Trustee for The Roshana Family Trust`, `Zurich Australian Insurance Ltd`, `Dept of Education & Training - School Cleaners`, `DXC Claims Management Services`, `The Medical Clinic Millicent`, `Melbourne Water`, `COMPASS GROUP AUSTRALIA PTY LTD`, `Andreasens Green NSW Andreasens Green QLD`, `Astridge and Murray`, `EML Plus`, `Philips Electronics P/L`, `ISS Facility Services Australia Ltd`, `Busy Bees Early Learning Australia Pty Ltd`, `Coates Hire`, `Sydney Trains`, `Catholic Schools Parramatta Diocese Limited`, `CGU Workers Compensation TAS`, `Mercer`, `COFFS HARBOUR SUPPORT SERVICES LTD`, `I-MED GROUP`, `One Path`, `Transport Accident Commission`, `Department of Corporate and Digital Development Northern Territory Government`, `Boral Insurance Pty Limited`, `Department of Justice`, `AB Phillips Pty Ltd`, `Irwin & Hartshorn`, `Pacific Labour Facility`, `Suncorp Staff Pty Ltd`, `Vilis Bakery`, `NRMA`, `The Hospitals Contribution Fund Of Australia Ltd`, `SCE Group`, `Our Lady of Mercy College Parramatta`, `DOSER Freight Forwarding`, `Employers Mutual NSW Limited`, `Cappello Hydraulics & Civil Pty Ltd`, `Buderim Kindergarten`, `ACT Recycling Pty Ltd`, `Bupa Medical Visa Services`, `Allianz CTP SA`, `Auspost`, `Liverpool Plains Shire Council`, `Corporate Services Network Pty Ltd`, `DP World Australia Pty Ltd`, `Complete Personnel Recruitment`, `DXC Integrated Services`, `QBE Workers Compensation - ACT`, `BINGO PTY LTD`, `The Arnott’s Group`, `EML Agent for icare Workers Insurance`, `IHG 
Irwin Hartshorn Group`, `Civilmart`, `ORAMS Agencies`, `Liberty GFG`, `QBE NSW Treasury Managed Fund`, `EML (NSW Treasury Managed Fund)`, `Hays Recruitment`, `Mosaic Group Ltd Pty`, `BlueCare`, `Gallagher Bassett Services`, `Ernst & Young (EY)`, `Cootharinga North Queensland`, `BUPA AGED CARE AUSTRALIA P/L`, `Toll Self Insurance`, `Corporate Services Network`, `ACT GOV`, `SA Health Northern Adelaide Local Health Network`, `Inghams Enterprises Pty Ltd`, `Centrewest Insurance Brokers`, `Department of Foreign Affairs and Trade (DFAT)`, `RSL Life Care`, `Star of the Sea School`, `Chubb`, `Suncorp CTP QLD`, `JACANA ENERGY`, `Toll Group`, `Corporeal Health`, `Mosaic Brands (Noni B Limited)`, `QBE CTP Insurance`, `Q Super`, `Bartier Perry Lawyers`, `Queensland Government`, `Department of Health and Human Services Tasmania`, `Hall and Wilcox Lawyers`, `Griffin Coal`, `Cappello Commercial Hydraulics and Civil Pty Ltd`, `Bolton Clarke`, `Australian Unity`, `Gallagher Bassett Services Pty Ltd`, `St John Ambulance Western Australia Ltd`, `Geocon Group Pty Ltd`, `Allianz Australia Insurance Limited c/ Jensen McConaghy Lawyers`, `UAA Pty Ltd`, `Tamex Transport Services Pty Ltd`, `WFI Insurance Limited`, `Programmed Skilled Workforce Limited`, `Bartier Perry`, `Australian Competition & Consumer Commission`, `Queensland Health`, `Holcim (Australia) Pty Ltd`, `Southern NSW Local Health District`, `Blue Care`, `Gallagher Bassett Workers Compensation VIC`, `Point Insurance`, `Workers Compensation & Risk Specialists (WCRS) services render for Philips electronics P/L`, `Country Wide Insurance Brokers (CWIB)`, `Allianz Australia Insurance Ltd C/ - Moray and Agnew Lawyers`, `CHUBB AUSTRALASIA`, `Sirius Support & Industrious People`, `BORG MANUFACTURING P/L`, `Department of Climate Change, Energy, the Environment and Water`, `Hireup Pty. Ltd.`, `Workcover QLD`, `Greenham Tasmania `, `Fantastic Furniture Ltd`, `CGU Workers Compensation VIC`, `Lawson Risk Management Services Pty Ltd`, `SGP Civil`, `Moray & Agnew`, `Edwards Michael Lawyers`, `Jensen McConarchy`, `Cyprium Metals`, `Hunter New England Local Health District`, `EML TMF, Insurance for NSW`, `RACQ Insurance`, `Blue Care ATF The Uniting Church in Aust. 
Property Trust (Q)`, `ENERGYAUSTRALIA SERVICES P/L`, `AAMI CTP`, `Bupa Asia Pacific`, `The Good Shepherd Home`, `Department of Corporate and Digital Development`, `Allianz CTP Claims NSW`, `Sedgwick Australia`, `Racing NSW`, `GCI Group`, `Australia Post`, `Coles Group Limited`, `Minter Ellison`, `MCCOLL'S OPERATIONS P/L`, `Apprenticeship Support Australia`, `AIA Australia Limited`, `Ernst & Young Services Pty Limited`, `North Metropolitan Health Service`, `St Vincent de Paul Society Canberra/Goulburn (Inc)`, `DP WORLD AUSTRALIA FREMANTLE TERMINAL`, `Moray and Agnew`, `Mosaic Group`, `Ovato`, `ACT Formwork Pty Ltd`, `DORMAKABA AUSTRALIA PTY LTD`, `Jones Harley Toole`, `QBE Accident and Health`, `Crawford Legal`, `REA Group Ltd`, `Amadeus IT Pacific Pty Ltd`, `DXC Integrated Services Victoria Pty Ltd`, `Vellex Pty Ltd`, `3M Australia`, `RTC Consulting`, `Somerset College Ltd`, `Bupa Care Services`, `IKEA North Lakes`, `Australian Criminal Intelligence Commission`, `McInnes Wilson Lawyers`, `UnitingCare Queensland `, `Anglican Community Care Incorporated (trading as ac.care)`, `Electrolux Home Products Pty Ltd`, `Gen Leads`, `FUSE RECRUITMENT MELBOURNE P/L`, `Zurich Financial Services Australia Limited`, `Wesfarmers Group TeamCover`, `Connect Infrastructure`, `Oji Fibre Solutions (Aus) Pty Ltd`, `Quality Bakers Australia Pty Limited`, `Workers Compensation & Risk Specialists`, `Civil Aviation Safety Authority (CASA)`, `Endeavour Foundation`, `The Territory Boundless Possible`, `Territory Families – Housing & Communities`, `Ampol Australia Petroleum Pty Ltd`, `Seven Network (Operations) Ltd`, `HopgoodGanim Lawyers`, `Coal Mines Insurance`, `QBE Insurance Australia`, `UGL Limited`, `QBE Accident and Health `, `C.INC`, `Ikea Logan`, `VERO`, `Geodis Australia`, `McCabes Lawyers`, `Programmed`, `UNSW Canberra`, `EML, Agent for ReturnToWorkSA`, `TEST ORG 2. 
EML Workers Insurance NSW`, `Kings Group`, `Maney Transport`, `South Western Sydney Lhd`, `Force Fire and Safety Pty Ltd`, `Astridge & Murray Solicitors `, `Rankin Ellison Lawyers`, `EML Insurance`, `ACCC/AER`, `Facilities First`, `Turks Legal`, `Jenson McConaghy Lawyers`, `CGU Insurance`, `AAI Limited trading as GIO`, `BP Australia Limited C/ Collin Biggers & Paisley Lawyers`, `O’Neill & Brown Electrical Services Pty Ltd`, `St Kilda PCYC`, `Justice Services Pty Ltd`, `American Express International Inc`, `Gillis Delaney Lawyers`, `Cabra Dominican College Ltd.`, `Trident Services Cleaning Pty Ltd`, `Hicksons Lawyers`, `Healthscope Operations Pty Ltd`, `GSK CX Healthcare Pty Ltd`, `ACT Government`, `AJ Bush & Sons Pty Ltd`, `OMB Solicitors`, `EML Self Insurance`, `Cooper Grace Ward`, `GC Legal`, `Centacare Catholic Family Services`, `Etex Australia Pty Ltd`, `Allianz Australia Ltd`, `Envirolab Service`, `Ikea `, `Allianz Australia Insurance Limited`, `WorkCover Queensland`, `Allianz Workers Compensation ACT`, `GIO Workers Compensation NSW`, `GenesisCare`, `Rocklea Pressed Metal Pty Ltd `, `Australian Digital Health Agency`, `HWL Ebsworth`, `Museum and Art Gallery Northern Territory (MAGNT)`, `CSR`, `Connell`, `4cRisk`, `HBA Legal`, `Coles Supermarkets Australia Pty Ltd`, `The University of Queensland`, `VENTIA SERVICES GROUP P/L,VENT`, `Point Underwriting Agency Pty Ltd`, `Youi CTP SA`, `Allianz Workers Compensation NSW`, `Detmold Packaging Pty Ltd`, `KENNARDS HIRE PTY LTD`, `QBE CTP QLD`, `Insurance House Group`, `Kilcoy Pastoral Company Limited`, `SRG Global Mining (Australia) Pty Ltd`, `Hunter Imaging Group`, `Park Hyatt Melbourne`, `Enviro Lab`, `QBE Australia Insurance Limited`, `EML c/o Moray`, `Catholic Church Insurance Limited`, `NV EMPLOYMENT PTY LTD`, `IP Australia`, `Qantas`, `Wesfarmer Limited`, `Melton City Council`, `Workcover Employer For Special Policies`, `Allianz Australia Workers Compensation (NSW) Ltd.`, `Uniting Care Health`, `Staff Australia Payroll Services Pty Ltd`, `WN Group`, `Infrabuild`, `Western NSW Local Health District`, `APS Group`, `DXC Claims Management Services - VIC`, `GIO`, `Northern Adelaide Local Health Network `, `Austbrokers Canberra`, `Department of Treasury and Finance Northern Territory Government`, `PSC Workers Compensation & Consulting`, `Alinta Energy`, `Sunline ACT Pty Ltd`, `Allianz Australia Workers' Compensation (Victoria)`, `Suncorp`, `JW Land Construction`, `Comcare - VIC`, `IKEA Pty Limited`, `KENNARDS HIRE`, `IRI Worldwide`, `RFI Technology Solutions`, `Engage TSS Internal Resources`, `St Vincent’s Care Services Mitchelton`, `Cappello Concreting Services Pty Ltd`, `Correct Care Australasia P/L`, `Coal Services`, `VELLA TRANSPORT ADMINISTRATION PTY LTD`, `CGU Workers Compensation WA`, `CORPORATE SERVICE NETWORK`, `BGIS`, `SCENTRE LIMITED`, `Employers Mutual Limited`, `RAPE & DOMESTIC VIOLENCE SERVICES AUSTRALIA`, `PSC Insurance`, `Allianz Australia Insurance Ltd ACT`, `Big W`, `Coverforce Pty Ltd`, `AAMI SA CTP Claims`, `EML Workers Insurance`, `Emjay Insurance Brokers`, `EML Victoria`, `WorkSafe Claims and Recovery Support team`, `Adcor`, `Territory Families, Housing and Communities (TFHC)`, `Nazareth Catholic Community`, `Gallagher Bassett Workers Compensation SA`, `INVOCARE AUSTRALIA P/L`, `Hardman Risk Management`, `The Sydney Childrens Hospital Network`, `The Junction Works Limited`, `PEM DEMO`, `Queensland Ambulance Service`, `Fel Child Care Centres 1 Pty Ltd`, `Allianz CTP QLD`, `Moray & Agnew Lawyers`, `Programmed Maintenance 
Services Ltd (Self Insured)`, `iag`, `Barnardos`, `eReports `, `Youi Pty Ltd`, `HM Focus Pty Ltd`, `Allianz Workers Compensation VIC`, `iCare Workers Insurance`, `Procare Group`, `Kemp & Co Lawyers`, `AAMI Insurance`, `Combined Insurance`, `STAWELL GOLD MINES P/L`, `QBE CTP NSW`, `SA Health`, `Gilshenan & Luton Legal Practice`, `Genesis Care`, `SOUTH AUSTRALIA POLICE`, `Wollongong City Council`, `TUTT BRYANT GROUP LTD`, `Endeavour Energy`, `Tasmanian Health Service`, `IC Formwork Services Pty Ltd`, `Humdrum`, `Comcare`, `The Gowrie (Qld) Inc`, `Australian Government Department of Education, Skills and Employment`, `Gair Legal`, `Dept of Territory Families, Housing and Communities`, `McArthur River Mining PTY Ltd`, `Kincare Management Pty Ltd`, `CFA`, `Department of Territory Families, Housing and Communities Division Library & Archives NT`, `Department for Education and Child Development`, `Core Building Group Pty Ltd`, `ACH Group`, `Busy Bees Australia Operations Pty Ltd.`, `Wesfarmers Ltd`, `JBC Corporate`, `NULL`, `No Employer - ADL`, `BT Lawyers`, `InfraBuild Steel Centre`, `Kimberly-Clark`, `Tas TAFE`, `EML National Self Insurance`, `National Disability Insurance Agency`, `Colin Biggers & Paisley Pty`, `DP World Brisbane Pty Ltd`, `Australian Trade and Investment Commission (Austrade)`, `Allianz Australia Limited c/- McInnes Wilson Lawyers`, `Community Solutions`, `RFI`, `RACQ Insurance Limited ABN 50 009 704 152`, `AAI Limited trading as GIO`, `Gallagher Bassett Services Workers Compensation Vic Pty Ltd`, `Department of Infrastructure, Transport and Regional Development`, `PSC Insurance Group`, `Allianz CTP NSW`, `CSR Limited`, `Kimberly-Clark Australia P/L`, `Hall and Willcox Lawyers`, `Page Seager Lawyers`, `Iconic Hotels Management`, `St John Medical Centre`, `Department of Veterans Affairs`, `Allianz QLD CTP`, `Morgan & Agnew Lawyers`, `Bureau of Meteorology`, `Forest Coach Lines Pty / Ltd`, `Shaw's Darwin Transport Pty Ltd`, `Dynamic Diesel Mechanical Services Pty Ltd`, `Hall & Wilcox Lawyers`, `Moran Aged Care`, `[email protected]`, `Gallagher Bassett Self Insurance NSW`, `EML as agent for icare Workers Insurance NSW`, `Minter Ellison Lawyers`, `Lee Legal Group`, `Child and Adolescent Health Service (CAHS)`, `Holman Webb Lawyers`, `Dept of Home Affairs`, `QSuper`, `TIO Motor Accidents Compensation `, `Allianz Australia Workers' Compensation (Victoria) Limited`, `Perpetual Limited`, `Barwang Pty Ltd`, `CTP QLD Claims Division`, `InvoCare`, `Australian Border Force`, `I MED Radiology Network`, `Ensure Pty Ltd`, `CITY OF PALMERSTON`, `AKUBRA HATS PTY LTD`, `Secom Australia`, `GIO Workers Compensation NT`, `Pialligo Estate`, `Berry Buddle Wilkins`, `Department of Infrastructure, Transport, Regional Development and Communications`, `Aussie Skip Bins Services P/L`, `BGIS Pty Ltd`, `NSW Police Force`, `GIO Workers Compensation TAS`, `Eighteen33 Pty Ltd`, `Crown Law`, `Paramatta Council`, `Northern Territory Government`, `Australian Electoral Commission`, `Department of Health`, `Hunt & Hunt Lawyers`, `Batemans Bay Soldiers Club`, `Allianz Workers Compensation Tasmania`, `SMK Lawyers`, `Envirolab Group`, `WorkSafe Victoria`, `Allianz Australia Insurance Limited, c/- Moray & Agnew`, `Allianz Australia Insurance Limited ABN 15 000 122 850, c/- Moray & Agnew`, `City of Parramatta`, `UES International Pty Ltd`, `Westpac Group`, `Logistics & Stores (Mailroom, Stores & Transport) Services CHW`, `Device Technologies Australia Pty Ltd`, `Willis Towers Watson`, `Hsswa Pty Ltd & HSS Resources 
Pty Ltd & Other`, `Kingspan Water & Energy Pty Limited`, `SAPOL`, `Guild Insurance`, `Westpac Banking Group`, `St Hilarion Aged Care`, `AAI Limited trading as GIO - Agent for the Workers Compensation Nominal Insurer ABN 83 564 379 108`, `Roshana Pty Ltd`, `QBE Insurance (Australia) Limited (ABN 78003191035)`, `Service Australia`, `BOC Limited `, `HWLE Lawyers`, `NRMA CTP NSW`, `RACQ Insurance Limited ABN 50009704152/ C- Cooper Grace Ward`, `CALVARY ADMINISTRATION PTY LTD`, `Cappello Group`, `Wesfarmers Limited`, `GIO NSW CTP `, `FK Gardner Services (Qld) Pty Ltd`, `Challenge Implements Holdings`, `Bartier Perry Pty Limited`, `Chubb Insurance Australia Limited`, `EMP Michael Lawyers`, `I-MED RADIOLOGY NETWORK LIMITED`, `Gilchrist Connell Legal`, `Premier Office Relocations`, `Nominal Defendant c/- Jensen McConaghy Lawyers`, `Detmold Mental Health Training`, `EML`, `Premise`, `Balance Rehab`, `Xchanging Workers Compensation - NSW`, `Coogee Chemicals Pty Ltd`, `Safe Work Australia`, `Jensen McConaghy Lawyers`, `Hawkesbury City Council`, `Toll Global Express`, `The Corporation of the Synod of the Diocese of Brisbane`, `NRMA CTP SA`, `Ambulance Victoria`, `APSystems`, `Austbrokers (Finsura)`, `SCENTRE GROUP`, `Ikea Australia`, `Department of Treasury and Finance`, `Gallagher Bassett Services Workers Compensation NSW`, `NONI B HOLDINGS PTY LIMITED`, `QBE Workers Compensation SA`, `The Star Entertainment Group Self Insurance Unit`, `Catholic Care Diocese of Bathurst`, `GAIR LEGAL PTY LIMITED`, `QBE CTP SA`, `Wesfarmers Group`, `Rod Pilon Transport`, `TG Legal`, `Department of the Prime Minister and Cabinet`, `UNSW`, `RACQ Group`, `REMONDIS Australia Pty Ltd`, `Australian Federal Police`, `Marshall & Brougham Constructions `, `Chandler Macleod Group`, `University of Tasmania`, `Goodman Fielder Pty Limited`, `SONIC HEALTHCARE GROUP`, `Hastings Medical Centre`, `Hospitality Employers Mutual`, `HCF`, `Colin Biggers Paisley Lawyers`, `Department Veterans Affairs`, `Maddocks Lawyers`, `SRG Group`, `Australian Personnel Solutions (APS Group)`, `EY Business Solutions Pty Ltd`, `National Indigenous Australians Agency`, `St Catherine's School, Berwick`, `Transport for NSW`, `South Australian Native Titles Services` |
</details>
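Once the packaged pipeline is installed, scores for these labels come back through the standard spaCy textcat API; a minimal sketch, where the local package name being importable and the input sentence are both assumptions:
```python
import spacy

# Assumes the packaged pipeline has been installed locally,
# e.g. `pip install en_procare_referrer_organisation-0.0.0.tar.gz`
nlp = spacy.load("en_procare_referrer_organisation")

doc = nlp("Referral received from Allianz Australia Insurance Limited")

# textcat_multilabel stores one independent score per label in doc.cats
top_labels = sorted(doc.cats.items(), key=lambda kv: kv[1], reverse=True)[:3]
print(top_labels)
```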
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 32.28 |
| `CATS_MICRO_P` | 71.89 |
| `CATS_MICRO_R` | 23.49 |
| `CATS_MICRO_F` | 35.41 |
| `CATS_MACRO_P` | 7.06 |
| `CATS_MACRO_R` | 3.40 |
| `CATS_MACRO_F` | 4.32 |
| `CATS_MACRO_AUC` | 32.28 |
| `TEXTCAT_MULTILABEL_LOSS` | 7.88 | |
wongctroman/fine-tuned-cloudy-sentence-transformer-9 | wongctroman | "2024-03-11T04:13:52Z" | 49 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-03-11T04:12:08Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-9
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-9')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-9)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 18 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 500,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
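Put together, these parameters map onto the legacy `fit()` API roughly as follows; this is a sketch that reuses the `model` loaded in the usage section above, with hypothetical triplet examples standing in for the real training data:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, losses

# Hypothetical (anchor, positive, negative) triplets
train_examples = [
    InputExample(texts=["it is mostly cloudy", "overcast skies today", "clear and sunny"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=5)

train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=15,
    warmup_steps=100,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```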
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Lily-Phillips-Leaked-Video-Link/Lily.Phillips.Leaked.Viral.Video.Link | Lily-Phillips-Leaked-Video-Link | "2025-03-21T10:09:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-21T10:09:37Z" |
|
lakssrini/sd-class-butterflies-64 | lakssrini | "2022-12-13T23:38:23Z" | 1 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2022-12-13T23:37:35Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('lakssrini/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
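The pipeline returns PIL images, so the sample can be written straight to disk:
```python
# Save the generated butterfly sample (the filename is illustrative)
image.save("butterfly.png")
```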
|
trieudemo11/llama_miravia_6 | trieudemo11 | "2023-09-12T14:18:31Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-12T14:18:15Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
demohong/f1b4a6be-ba02-4597-963d-628403c39556 | demohong | "2025-02-03T04:26:32Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-03T03:54:33Z" | ---
library_name: peft
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f1b4a6be-ba02-4597-963d-628403c39556
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/zephyr-7b-beta
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0b8ec00a82ea8dc3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0b8ec00a82ea8dc3_train_data.json
type:
field_instruction: nl
field_output: cmd
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/f1b4a6be-ba02-4597-963d-628403c39556
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0b8ec00a82ea8dc3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a3687942-dfd9-4af8-8973-e803e48dda2e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a3687942-dfd9-4af8-8973-e803e48dda2e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f1b4a6be-ba02-4597-963d-628403c39556
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.347 | 0.1874 | 200 | 1.2184 |
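To run inference, the adapter can be applied on top of the base checkpoint with PEFT; a minimal sketch, where the dtype and device placement are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "demohong/f1b4a6be-ba02-4597-963d-628403c39556")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
```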
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hgnoi/t4FBIGdT2QBlhRhG | hgnoi | "2024-05-21T17:00:18Z" | 121 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-21T16:58:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
facebook/mms-tts-cme | facebook | "2023-09-01T17:07:24Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-09-01T17:06:54Z" |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Cerma Text-to-Speech
This repository contains the **Cerma (cme)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-cme")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-cme")
text = "some example text in the Cerma language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
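Because the duration predictor is stochastic, fixing a seed before the forward pass makes the output reproducible; a short continuation of the snippet above (the seed value is arbitrary):
```python
import torch
from transformers import set_seed

set_seed(555)  # pins the stochastic duration predictor

with torch.no_grad():
    output = model(**inputs).waveform  # identical waveform on every run with this seed
```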
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output)
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
ssmits/Falcon2-5.5B-multilingual-embed-base | ssmits | "2024-06-10T13:48:31Z" | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"falcon",
"ssmits/Falcon2-5.5B-multilingual",
"text-classification",
"custom_code",
"es",
"fr",
"de",
"no",
"sv",
"da",
"nl",
"pt",
"pl",
"ro",
"it",
"cs",
"base_model:ssmits/Falcon2-5.5B-multilingual",
"base_model:finetune:ssmits/Falcon2-5.5B-multilingual",
"license:apache-2.0",
"region:us"
] | text-classification | "2024-06-08T18:39:16Z" | ---
base_model:
- ssmits/Falcon2-5.5B-multilingual
library_name: sentence-transformers
tags:
- ssmits/Falcon2-5.5B-multilingual
license: apache-2.0
language:
- es
- fr
- de
- 'no'
- sv
- da
- nl
- pt
- pl
- ro
- it
- cs
pipeline_tag: text-classification
---
## Usage
An embeddings version of the base model [ssmits/Falcon2-5.5B-multilingual](https://huggingface.co/ssmits/Falcon2-5.5B-multilingual).
The 'lm_head' layer of this model has been removed, so it can be used for embeddings. Out of the box it will not perform well: as a pruned model it needs further fine-tuning, as demonstrated by [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct).
Additionally, instead of a normalization layer, the hidden layers are followed by both a classical weight and a bias 1-dimensional array of 4096 values.
The basic Sentence-Transformers implementation works correctly, which implies that more sophisticated embedding techniques, such as adding a custom classification head, will work correctly as well.
## Inference (sentence-transformers)
```python
from sentence_transformers import SentenceTransformer
import torch
# 1. Load a pretrained Sentence Transformer model
model = SentenceTransformer("ssmits/Falcon2-5.5B-multilingual-embed-base") # device = "cpu" when <= 24 GB VRAM
# The sentences to encode
sentences = [
"The weather is lovely today.",
"It's so sunny outside!",
"He drove to the stadium.",
]
# 2. Calculate embeddings by calling model.encode()
embeddings = model.encode(sentences, convert_to_tensor=True)  # return a torch tensor for the cosine-similarity step below
print(embeddings.shape)
# (3, 4096)
# 3. Calculate the embedding similarities
# Using torch to compute cosine similarity matrix
similarities = torch.nn.functional.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
print(similarities)
# tensor([[1.0000, 0.7120, 0.5937],
# [0.7120, 1.0000, 0.5925],
# [0.5937, 0.5925, 1.0000]])
```
Note: In my tests it utilizes more than 24GB (RTX 4090), so an A100 or A6000 would be required for inference.
## Inference (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ssmits/Falcon2-5.5B-multilingual-embed-base')
model = AutoModel.from_pretrained('ssmits/Falcon2-5.5B-multilingual-embed-base') # device = "cpu" when <= 24 GB VRAM
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
### How to enable Multi-GPU
```python
from transformers import AutoModel
from torch.nn import DataParallel
model = AutoModel.from_pretrained("ssmits/Falcon2-5.5B-multilingual-embed-base")
for module_key, module in model._modules.items():
model._modules[module_key] = DataParallel(module)
``` |
94Rachel/No.1 | 94Rachel | "2022-11-16T01:26:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2022-11-16T01:22:24Z" | long hair
Sexy body
Snowflakes
Blue eyes |
kk-aivio/d620d9c9-6da9-4493-ab1d-81eba7716f4c | kk-aivio | "2025-02-16T11:34:29Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | "2025-02-16T10:26:58Z" | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d620d9c9-6da9-4493-ab1d-81eba7716f4c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d620d9c9-6da9-4493-ab1d-81eba7716f4c
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Cyber-ThreaD/RoBERTa-CyNER | Cyber-ThreaD | "2023-12-06T16:58:52Z" | 6 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-12-06T16:58:11Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: dnrti_our
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dnrti_our
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0737
- Precision: 0.7870
- Recall: 0.7880
- F1: 0.7875
- Accuracy: 0.9836
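For a quick check, the checkpoint can be loaded with the standard token-classification pipeline; a minimal sketch, where the example sentence is illustrative and the entity labels depend on the training data described above:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Cyber-ThreaD/RoBERTa-CyNER",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Illustrative cybersecurity sentence
print(ner("APT28 used Mimikatz to harvest credentials from the compromised host."))
```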
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.13 | 1.42 | 500 | 0.0886 | 0.7138 | 0.7548 | 0.7337 | 0.9796 |
| 0.0421 | 2.84 | 1000 | 0.0737 | 0.7870 | 0.7880 | 0.7875 | 0.9836 |
| 0.0249 | 4.26 | 1500 | 0.0855 | 0.7655 | 0.7714 | 0.7684 | 0.9822 |
| 0.0167 | 5.68 | 2000 | 0.0946 | 0.7554 | 0.8008 | 0.7774 | 0.9826 |
| 0.0104 | 7.1 | 2500 | 0.0976 | 0.7540 | 0.7829 | 0.7682 | 0.9820 |
| 0.0066 | 8.52 | 3000 | 0.1024 | 0.7742 | 0.8059 | 0.7897 | 0.9836 |
| 0.0044 | 9.94 | 3500 | 0.1069 | 0.7764 | 0.7982 | 0.7872 | 0.9833 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
jssky/d129099c-6fb3-44fe-bbd5-80d070062149 | jssky | "2025-02-13T15:58:45Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-02-13T14:22:59Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d129099c-6fb3-44fe-bbd5-80d070062149
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b-it
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3475255fa7bf7d93_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3475255fa7bf7d93_train_data.json
type:
field_instruction: prompt
field_output: response_1
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: false
group_by_length: true
hub_model_id: jssky/d129099c-6fb3-44fe-bbd5-80d070062149
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/3475255fa7bf7d93_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: afce6915-db2a-401b-968c-d6888e8e66e4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: afce6915-db2a-401b-968c-d6888e8e66e4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d129099c-6fb3-44fe-bbd5-80d070062149
This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5099
## Model description
More information needed
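The YAML header above does identify this as a LoRA adapter for unsloth/codegemma-7b-it, so a minimal loading sketch with 🤗 PEFT follows (identifiers are taken from this card; no generation settings are implied):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("unsloth/codegemma-7b-it")
model = PeftModel.from_pretrained(base, "jssky/d129099c-6fb3-44fe-bbd5-80d070062149")
tokenizer = AutoTokenizer.from_pretrained("unsloth/codegemma-7b-it")
```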
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 881
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7125 | 0.2510 | 221 | 1.6405 |
| 1.2785 | 0.5019 | 442 | 1.5789 |
| 1.5862 | 0.7529 | 663 | 1.5099 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
earnxus/07078a21-1af0-43f6-b62e-f8c8c59dc73a | earnxus | "2025-02-07T15:15:33Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-07T14:46:32Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 07078a21-1af0-43f6-b62e-f8c8c59dc73a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5d5a30f7d9d218e8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5d5a30f7d9d218e8_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/07078a21-1af0-43f6-b62e-f8c8c59dc73a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 500
micro_batch_size: 2
mlflow_experiment_name: /tmp/5d5a30f7d9d218e8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 4f385546-fbc5-42c8-b7c4-492fbdd647bd
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: 4f385546-fbc5-42c8-b7c4-492fbdd647bd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 07078a21-1af0-43f6-b62e-f8c8c59dc73a
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3988
## Model description
More information needed
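Per the tags above (8-bit, bitsandbytes), the adapter was trained against an 8-bit base. A minimal sketch that mirrors this at load time; the quantization settings are an assumption for illustration, not a requirement of the adapter:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the Llama 3.1 base in 8-bit, then attach this LoRA adapter.
bnb = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "earnxus/07078a21-1af0-43f6-b62e-f8c8c59dc73a")
```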
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 333
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2346 | 0.9985 | 332 | 1.3727 |
| 3.111 | 1.0015 | 333 | 1.3988 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
alexisbaladon/autotrain-huhu-humor-54189127188 | alexisbaladon | "2023-04-30T15:54:00Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"es",
"dataset:alexisbaladon/autotrain-data-huhu-humor",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-04-30T15:53:14Z" | ---
tags:
- autotrain
- text-classification
language:
- es
widget:
- text: "I love AutoTrain 🤗"
datasets:
- alexisbaladon/autotrain-data-huhu-humor
co2_eq_emissions:
emissions: 0.3100765073399468
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 54189127188
- CO2 Emissions (in grams): 0.3101
## Validation Metrics
- Loss: 0.426
- Accuracy: 0.835
- Precision: 0.795
- Recall: 0.710
- AUC: 0.869
- F1: 0.750
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/alexisbaladon/autotrain-huhu-humor-54189127188
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alexisbaladon/autotrain-huhu-humor-54189127188", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alexisbaladon/autotrain-huhu-humor-54189127188", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
tensorblock/Big-Tiger-Gemma-27B-v1-GGUF | tensorblock | "2024-11-28T18:55:25Z" | 183 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:TheDrummer/Big-Tiger-Gemma-27B-v1",
"base_model:quantized:TheDrummer/Big-Tiger-Gemma-27B-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-28T16:32:43Z" | ---
base_model: TheDrummer/Big-Tiger-Gemma-27B-v1
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## TheDrummer/Big-Tiger-Gemma-27B-v1 - GGUF
This repo contains GGUF format model files for [TheDrummer/Big-Tiger-Gemma-27B-v1](https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
```
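As a rough sketch, the template can be filled in and fed to one of the quantized files via llama-cpp-python (file name, context size, and stop-token handling are illustrative):
```python
from llama_cpp import Llama

# Assumes a quant from the table below has already been downloaded locally.
llm = Llama(model_path="Big-Tiger-Gemma-27B-v1-Q4_K_M.gguf", n_ctx=4096)
prompt = (
    "<bos><start_of_turn>user\n"
    "Summarize GGUF quantization in one sentence.<end_of_turn>\n"
    "<start_of_turn>model\n"
)
out = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])
```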
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Big-Tiger-Gemma-27B-v1-Q2_K.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q2_K.gguf) | Q2_K | 10.450 GB | smallest, significant quality loss - not recommended for most purposes |
| [Big-Tiger-Gemma-27B-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q3_K_S.gguf) | Q3_K_S | 12.169 GB | very small, high quality loss |
| [Big-Tiger-Gemma-27B-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q3_K_M.gguf) | Q3_K_M | 13.425 GB | very small, high quality loss |
| [Big-Tiger-Gemma-27B-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q3_K_L.gguf) | Q3_K_L | 14.519 GB | small, substantial quality loss |
| [Big-Tiger-Gemma-27B-v1-Q4_0.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q4_0.gguf) | Q4_0 | 15.628 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Big-Tiger-Gemma-27B-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q4_K_S.gguf) | Q4_K_S | 15.739 GB | small, greater quality loss |
| [Big-Tiger-Gemma-27B-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q4_K_M.gguf) | Q4_K_M | 16.645 GB | medium, balanced quality - recommended |
| [Big-Tiger-Gemma-27B-v1-Q5_0.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q5_0.gguf) | Q5_0 | 18.884 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Big-Tiger-Gemma-27B-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q5_K_S.gguf) | Q5_K_S | 18.884 GB | large, low quality loss - recommended |
| [Big-Tiger-Gemma-27B-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q5_K_M.gguf) | Q5_K_M | 19.408 GB | large, very low quality loss - recommended |
| [Big-Tiger-Gemma-27B-v1-Q6_K.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q6_K.gguf) | Q6_K | 22.344 GB | very large, extremely low quality loss |
| [Big-Tiger-Gemma-27B-v1-Q8_0.gguf](https://huggingface.co/tensorblock/Big-Tiger-Gemma-27B-v1-GGUF/blob/main/Big-Tiger-Gemma-27B-v1-Q8_0.gguf) | Q8_0 | 28.937 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Big-Tiger-Gemma-27B-v1-GGUF --include "Big-Tiger-Gemma-27B-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Big-Tiger-Gemma-27B-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
Naruke/ppo-SnowballTarget | Naruke | "2023-07-27T16:08:40Z" | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-07-27T16:08:37Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Naruke/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
sonyashijin/tinyllama_100_hippo_30k_seed_0.05_v2 | sonyashijin | "2024-12-20T16:20:50Z" | 150 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-20T16:20:13Z" | ---
base_model: unsloth/tinyllama-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sonyashijin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
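A minimal generation sketch with 🤗 Transformers (the prompt and decoding settings are illustrative; the card does not specify a chat template):
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="sonyashijin/tinyllama_100_hippo_30k_seed_0.05_v2")
print(pipe("Question: What is hypertension?\nAnswer:", max_new_tokens=64)[0]["generated_text"])
```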
|
codersan/Orca2_7b_Enlighten_V1 | codersan | "2024-01-19T12:46:50Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Orca-2-7b",
"base_model:adapter:microsoft/Orca-2-7b",
"region:us"
] | null | "2024-01-19T12:46:30Z" | ---
library_name: peft
base_model: microsoft/Orca-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
caesium94/models_colorist-v1-1e-5 | caesium94 | "2024-05-17T13:33:04Z" | 153 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-17T13:31:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
0x1202/37e94694-3d95-483f-9307-f9c1ff1ad9f9 | 0x1202 | "2025-01-26T21:56:00Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-26T21:10:31Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 37e94694-3d95-483f-9307-f9c1ff1ad9f9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 128b06698547c5af_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/128b06698547c5af_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: 0x1202/37e94694-3d95-483f-9307-f9c1ff1ad9f9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/128b06698547c5af_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9397130d-f7c8-478e-9adb-b0c4c0805184
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9397130d-f7c8-478e-9adb-b0c4c0805184
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 37e94694-3d95-483f-9307-f9c1ff1ad9f9
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0645
## Model description
More information needed
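As with any PEFT LoRA adapter, the weights can be merged into the base model for standalone deployment. A minimal sketch (the output directory name is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-360M-Instruct")
adapter = PeftModel.from_pretrained(base, "0x1202/37e94694-3d95-483f-9307-f9c1ff1ad9f9")
merged = adapter.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("smollm-360m-merged")
```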
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9379 | 0.0001 | 1 | 1.3860 |
| 1.0127 | 0.0054 | 50 | 1.1397 |
| 0.8233 | 0.0107 | 100 | 1.0928 |
| 0.9014 | 0.0161 | 150 | 1.0680 |
| 0.88 | 0.0214 | 200 | 1.0645 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Nanashi-2x7B-bf16-GGUF | mradermacher | "2024-05-06T05:33:00Z" | 74 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"merge",
"en",
"base_model:Kquant03/Nanashi-2x7B-bf16",
"base_model:quantized:Kquant03/Nanashi-2x7B-bf16",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-03-30T15:11:13Z" | ---
base_model: Kquant03/Nanashi-2x7B-bf16
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- merge
---
## About
static quants of https://huggingface.co/Kquant03/Nanashi-2x7B-bf16
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
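For single-file quants like the ones below, a minimal download sketch with `huggingface_hub` (pick any filename from the table):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Nanashi-2x7B-bf16-GGUF",
    filename="Nanashi-2x7B-bf16.Q4_K_S.gguf",  # any quant from the table
)
print(path)
```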
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.IQ3_XS.gguf) | IQ3_XS | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.IQ3_M.gguf) | IQ3_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q4_0.gguf) | Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.IQ4_NL.gguf) | IQ4_NL | 7.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q5_K_M.gguf) | Q5_K_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nanashi-2x7B-bf16-GGUF/resolve/main/Nanashi-2x7B-bf16.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Nekochu/Wav2Lip | Nekochu | "2023-06-27T17:32:53Z" | 0 | 1 | null | [
"arxiv:2008.10010",
"region:us"
] | null | "2023-06-27T17:25:26Z" | Original upload: https://github.com/Rudrabha/Wav2Lip
# **Wav2Lip**: *Accurately Lip-syncing Videos In The Wild*
For commercial requests, please contact us at [email protected] or [email protected]. We have an HD model ready that can be used commercially.
This code is part of the paper: _A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild_ published at ACM Multimedia 2020.
[](https://paperswithcode.com/sota/lip-sync-on-lrs2?p=a-lip-sync-expert-is-all-you-need-for-speech)
[](https://paperswithcode.com/sota/lip-sync-on-lrs3?p=a-lip-sync-expert-is-all-you-need-for-speech)
[](https://paperswithcode.com/sota/lip-sync-on-lrw?p=a-lip-sync-expert-is-all-you-need-for-speech)
|📑 Original Paper|📰 Project Page|🌀 Demo|⚡ Live Testing|📔 Colab Notebook
|:-:|:-:|:-:|:-:|:-:|
[Paper](http://arxiv.org/abs/2008.10010) | [Project Page](http://cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild/) | [Demo Video](https://youtu.be/0fXaDCZNOJc) | [Interactive Demo](https://bhaasha.iiit.ac.in/lipsync) | [Colab Notebook](https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing) / [Updated Colab Notebook](https://colab.research.google.com/drive/1IjFW1cLevs6Ouyu4Yht4mnR4yeuMqO7Y#scrollTo=MH1m608OymLH)
<img src="https://drive.google.com/uc?export=view&id=1Wn0hPmpo4GRbCIJR8Tf20Akzdi1qjjG9"/>
----------
**Highlights**
----------
- Weights of the visual quality disc have been updated in the readme!
- Lip-sync videos to any target speech with high accuracy :100:. Try our [interactive demo](https://bhaasha.iiit.ac.in/lipsync).
- :sparkles: Works for any identity, voice, and language. Also works for CGI faces and synthetic voices.
- Complete training code, inference code, and pretrained models are available :boom:
- Or, quick-start with the Google Colab Notebook: [Link](https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing). Checkpoints and samples are available in a Google Drive [folder](https://drive.google.com/drive/folders/1I-0dNLfFOSFwrfqjNa-SXuwaURHE5K4k?usp=sharing) as well. There is also a [tutorial video](https://www.youtube.com/watch?v=Ic0TBhfuOrA) on this, courtesy of [What Make Art](https://www.youtube.com/channel/UCmGXH-jy0o2CuhqtpxbaQgA). Also, thanks to [Eyal Gruss](https://eyalgruss.com), there is a more accessible [Google Colab notebook](https://j.mp/wav2lip) with more useful features. A tutorial Colab notebook is available at this [link](https://colab.research.google.com/drive/1IjFW1cLevs6Ouyu4Yht4mnR4yeuMqO7Y#scrollTo=MH1m608OymLH).
- :fire: :fire: Several new, reliable evaluation benchmarks and metrics [[`evaluation/` folder of this repo]](https://github.com/Rudrabha/Wav2Lip/tree/master/evaluation) released. Instructions to calculate the metrics reported in the paper are also present.
--------
**Disclaimer**
--------
All results from this open-source code or our [demo website](https://bhaasha.iiit.ac.in/lipsync) should be used for research/academic/personal purposes only. As the models are trained on the <a href="http://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html">LRS2 dataset</a>, any form of commercial use is strictly prohibited. For commercial requests, please contact us directly!
Prerequisites
-------------
- `Python 3.6`
- ffmpeg: `sudo apt-get install ffmpeg`
- Install necessary packages using `pip install -r requirements.txt`. Alternatively, instructions for using a docker image are provided [here](https://gist.github.com/xenogenesi/e62d3d13dadbc164124c830e9c453668). Have a look at [this comment](https://github.com/Rudrabha/Wav2Lip/issues/131#issuecomment-725478562) and comment on [the gist](https://gist.github.com/xenogenesi/e62d3d13dadbc164124c830e9c453668) if you encounter any issues.
- Face detection [pre-trained model](https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth) should be downloaded to `face_detection/detection/sfd/s3fd.pth`. Alternative [link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/prajwal_k_research_iiit_ac_in/EZsy6qWuivtDnANIG73iHjIBjMSoojcIV0NULXV-yiuiIg?e=qTasa8) if the above does not work.
Getting the weights
----------
| Model | Description | Link to the model |
| :-------------: | :---------------: | :---------------: |
| Wav2Lip | Highly accurate lip-sync | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/Eb3LEzbfuKlJiR600lQWRxgBIY27JZg80f7V9jtMfbNDaQ?e=TBFBVW) |
| Wav2Lip + GAN | Slightly inferior lip-sync, but better visual quality | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EdjI7bZlgApMqsVoEUUXpLsBxqXbn5z8VTmoxp55YNDcIA?e=n9ljGW) |
| Expert Discriminator | Weights of the expert discriminator | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQRvmiZg-HRAjvI6zqN9eTEBP74KefynCwPWVmF57l-AYA?e=ZRPHKP) |
| Visual Quality Discriminator | Weights of the visual disc trained in a GAN setup | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQVqH88dTm1HjlK11eNba5gBbn15WMS0B0EZbDBttqrqkg?e=ic0ljo) |
Lip-syncing videos using the pre-trained models (Inference)
-------
You can lip-sync any video to any audio:
```bash
python inference.py --checkpoint_path <ckpt> --face <video.mp4> --audio <an-audio-source>
```
The result is saved (by default) in `results/result_voice.mp4`. You can specify it as an argument, similar to several other available options. The audio source can be any file supported by `FFMPEG` containing audio data: `*.wav`, `*.mp3` or even a video file, from which the code will automatically extract the audio.
##### Tips for better results:
- Experiment with the `--pads` argument to adjust the detected face bounding box. This often leads to improved results. You might need to increase the bottom padding to include the chin region, e.g. `--pads 0 20 0 0`.
- If the mouth position is dislocated or you see weird artifacts such as two mouths, it can be because of over-smoothing of the face detections. Use the `--nosmooth` argument and give it another try.
- Experiment with the `--resize_factor` argument to get a lower-resolution video. Why? The models are trained on faces at a lower resolution. You might get better, more visually pleasing results for 720p videos than for 1080p videos (in many cases, the latter works well too).
- The Wav2Lip model without GAN usually needs more experimentation with the above two options to get the most ideal results, and can sometimes give you a better result as well. A combined invocation using these flags is sketched below.
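Putting these tips together, a sketch of driving `inference.py` from Python (checkpoint and media paths are placeholders; the flag values are examples, not recommendations):
```python
import subprocess

subprocess.run([
    "python", "inference.py",
    "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # placeholder path
    "--face", "input.mp4",
    "--audio", "speech.wav",
    "--pads", "0", "20", "0", "0",   # extra bottom padding for the chin
    "--resize_factor", "2",          # run on a lower-resolution face
    "--nosmooth",                    # try this if mouth artifacts appear
], check=True)
```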
Preparing LRS2 for training
----------
Our models are trained on LRS2. See [here](#training-on-datasets-other-than-lrs2) for a few suggestions regarding training on other datasets.
##### LRS2 dataset folder structure
```
data_root (mvlrs_v1)
├── main, pretrain (we use only main folder in this work)
| ├── list of folders
| │ ├── five-digit numbered video IDs ending with (.mp4)
```
Place the LRS2 filelists (train, val, test) `.txt` files in the `filelists/` folder.
##### Preprocess the dataset for fast training
```bash
python preprocess.py --data_root data_root/main --preprocessed_root lrs2_preprocessed/
```
Additional options like `batch_size` and the number of GPUs to use in parallel can also be set.
##### Preprocessed LRS2 folder structure
```
preprocessed_root (lrs2_preprocessed)
├── list of folders
| ├── Folders with five-digit numbered video IDs
| │ ├── *.jpg
| │ ├── audio.wav
```
Train!
----------
There are two major steps: (i) Train the expert lip-sync discriminator, (ii) Train the Wav2Lip model(s).
##### Training the expert discriminator
You can download [the pre-trained weights](#getting-the-weights) if you want to skip this step. To train it:
```bash
python color_syncnet_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints>
```
##### Training the Wav2Lip models
You can either train the model without the additional visual quality discriminator (< 1 day of training) or use the discriminator (~2 days). For the former, run:
```bash
python wav2lip_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints> --syncnet_checkpoint_path <path_to_expert_disc_checkpoint>
```
To train with the visual quality discriminator, you should run `hq_wav2lip_train.py` instead. The arguments for both files are similar. In both cases, you can resume training as well. Look at `python wav2lip_train.py --help` for more details. You can also set additional, less commonly used hyper-parameters at the bottom of the `hparams.py` file.
Training on datasets other than LRS2
------------------------------------
Training on other datasets might require modifications to the code. Please read the following before you raise an issue:
- You might not get good results by training/fine-tuning on a few minutes of a single speaker. This is a separate research problem, to which we do not have a solution yet. Thus, we would most likely not be able to resolve your issue.
- You must train the expert discriminator for your own dataset before training Wav2Lip.
- If it is your own dataset downloaded from the web, it will, in most cases, need to be sync-corrected.
- Be mindful of the FPS of the videos of your dataset. Changes to FPS would need significant code changes.
- The expert discriminator's eval loss should go down to ~0.25 and the Wav2Lip eval sync loss should go down to ~0.2 to get good results.
When raising an issue on this topic, please let us know that you are aware of all these points.
We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model.
Evaluation
----------
Please check the `evaluation/` folder for the instructions.
License and Citation
----------
This repository can only be used for personal/research/non-commercial purposes. However, for commercial requests, please contact us directly at [email protected] or [email protected]. We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model. Please cite the following paper if you use this repository:
```
@inproceedings{10.1145/3394171.3413532,
author = {Prajwal, K R and Mukhopadhyay, Rudrabha and Namboodiri, Vinay P. and Jawahar, C.V.},
title = {A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild},
year = {2020},
isbn = {9781450379885},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3394171.3413532},
doi = {10.1145/3394171.3413532},
booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
pages = {484–492},
numpages = {9},
keywords = {lip sync, talking face generation, video generation},
location = {Seattle, WA, USA},
series = {MM '20}
}
```
Acknowledgements
----------
Parts of the code structure are inspired by this [TTS repository](https://github.com/r9y9/deepvoice3_pytorch). We thank the author for this wonderful code. The code for Face Detection has been taken from the [face_alignment](https://github.com/1adrianb/face-alignment) repository. We thank the authors for releasing their code and models. We thank [zabique](https://github.com/zabique) for the tutorial Colab notebook.
|
neuralmagic/bge-small-en-v1.5-sparse | neuralmagic | "2023-11-13T18:23:24Z" | 377 | 4 | transformers | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"mteb",
"sparse sparsity quantized onnx embeddings int8",
"en",
"license:mit",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2023-09-21T13:21:02Z" | ---
tags:
- mteb
- sparse sparsity quantized onnx embeddings int8
model-index:
- name: bge-small-en-v1.5-sparse
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 70.71641791044776
- type: ap
value: 32.850850647310004
- type: f1
value: 64.48101916414805
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 83.33962500000001
- type: ap
value: 78.28706349240106
- type: f1
value: 83.27426715603062
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.988
- type: f1
value: 40.776679545648506
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.101999999999997
- type: map_at_10
value: 40.754000000000005
- type: map_at_100
value: 41.83
- type: map_at_1000
value: 41.845
- type: map_at_3
value: 36.178
- type: map_at_5
value: 38.646
- type: mrr_at_1
value: 26.6
- type: mrr_at_10
value: 40.934
- type: mrr_at_100
value: 42.015
- type: mrr_at_1000
value: 42.03
- type: mrr_at_3
value: 36.344
- type: mrr_at_5
value: 38.848
- type: ndcg_at_1
value: 26.101999999999997
- type: ndcg_at_10
value: 49.126999999999995
- type: ndcg_at_100
value: 53.815999999999995
- type: ndcg_at_1000
value: 54.178000000000004
- type: ndcg_at_3
value: 39.607
- type: ndcg_at_5
value: 44.086999999999996
- type: precision_at_1
value: 26.101999999999997
- type: precision_at_10
value: 7.596
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 16.524
- type: precision_at_5
value: 12.105
- type: recall_at_1
value: 26.101999999999997
- type: recall_at_10
value: 75.96000000000001
- type: recall_at_100
value: 96.65700000000001
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 49.573
- type: recall_at_5
value: 60.526
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 43.10651535441929
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 34.41095293826606
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 56.96575970919239
- type: mrr
value: 69.92503187794047
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 79.64892774481326
- type: cos_sim_spearman
value: 78.953003817029
- type: euclidean_pearson
value: 78.92456838230683
- type: euclidean_spearman
value: 78.56504316985354
- type: manhattan_pearson
value: 79.21436359014227
- type: manhattan_spearman
value: 78.66263575501259
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.25
- type: f1
value: 81.20841448916138
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 34.69545244587236
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 28.84301739171936
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.401
- type: map_at_10
value: 32.451
- type: map_at_100
value: 33.891
- type: map_at_1000
value: 34.01
- type: map_at_3
value: 29.365999999999996
- type: map_at_5
value: 31.240000000000002
- type: mrr_at_1
value: 29.9
- type: mrr_at_10
value: 38.590999999999994
- type: mrr_at_100
value: 39.587
- type: mrr_at_1000
value: 39.637
- type: mrr_at_3
value: 36.028
- type: mrr_at_5
value: 37.673
- type: ndcg_at_1
value: 29.9
- type: ndcg_at_10
value: 38.251000000000005
- type: ndcg_at_100
value: 44.354
- type: ndcg_at_1000
value: 46.642
- type: ndcg_at_3
value: 33.581
- type: ndcg_at_5
value: 35.96
- type: precision_at_1
value: 29.9
- type: precision_at_10
value: 7.439
- type: precision_at_100
value: 1.28
- type: precision_at_1000
value: 0.17700000000000002
- type: precision_at_3
value: 16.404
- type: precision_at_5
value: 12.046
- type: recall_at_1
value: 23.401
- type: recall_at_10
value: 49.305
- type: recall_at_100
value: 75.885
- type: recall_at_1000
value: 90.885
- type: recall_at_3
value: 35.341
- type: recall_at_5
value: 42.275
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.103
- type: map_at_10
value: 29.271
- type: map_at_100
value: 30.151
- type: map_at_1000
value: 30.276999999999997
- type: map_at_3
value: 27.289
- type: map_at_5
value: 28.236
- type: mrr_at_1
value: 26.943
- type: mrr_at_10
value: 33.782000000000004
- type: mrr_at_100
value: 34.459
- type: mrr_at_1000
value: 34.525
- type: mrr_at_3
value: 31.985000000000003
- type: mrr_at_5
value: 32.909
- type: ndcg_at_1
value: 26.943
- type: ndcg_at_10
value: 33.616
- type: ndcg_at_100
value: 37.669000000000004
- type: ndcg_at_1000
value: 40.247
- type: ndcg_at_3
value: 30.482
- type: ndcg_at_5
value: 31.615
- type: precision_at_1
value: 26.943
- type: precision_at_10
value: 6.146
- type: precision_at_100
value: 1.038
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 14.521999999999998
- type: precision_at_5
value: 10.038
- type: recall_at_1
value: 22.103
- type: recall_at_10
value: 41.754999999999995
- type: recall_at_100
value: 59.636
- type: recall_at_1000
value: 76.801
- type: recall_at_3
value: 32.285000000000004
- type: recall_at_5
value: 35.684
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.565
- type: map_at_10
value: 43.07
- type: map_at_100
value: 44.102999999999994
- type: map_at_1000
value: 44.175
- type: map_at_3
value: 40.245
- type: map_at_5
value: 41.71
- type: mrr_at_1
value: 37.429
- type: mrr_at_10
value: 46.358
- type: mrr_at_100
value: 47.146
- type: mrr_at_1000
value: 47.187
- type: mrr_at_3
value: 44.086
- type: mrr_at_5
value: 45.318000000000005
- type: ndcg_at_1
value: 37.429
- type: ndcg_at_10
value: 48.398
- type: ndcg_at_100
value: 52.90899999999999
- type: ndcg_at_1000
value: 54.478
- type: ndcg_at_3
value: 43.418
- type: ndcg_at_5
value: 45.578
- type: precision_at_1
value: 37.429
- type: precision_at_10
value: 7.856000000000001
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 19.331
- type: precision_at_5
value: 13.191
- type: recall_at_1
value: 32.565
- type: recall_at_10
value: 61.021
- type: recall_at_100
value: 81.105
- type: recall_at_1000
value: 92.251
- type: recall_at_3
value: 47.637
- type: recall_at_5
value: 52.871
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.108
- type: map_at_10
value: 24.613
- type: map_at_100
value: 25.624000000000002
- type: map_at_1000
value: 25.721
- type: map_at_3
value: 22.271
- type: map_at_5
value: 23.681
- type: mrr_at_1
value: 19.435
- type: mrr_at_10
value: 26.124000000000002
- type: mrr_at_100
value: 27.07
- type: mrr_at_1000
value: 27.145999999999997
- type: mrr_at_3
value: 23.748
- type: mrr_at_5
value: 25.239
- type: ndcg_at_1
value: 19.435
- type: ndcg_at_10
value: 28.632
- type: ndcg_at_100
value: 33.988
- type: ndcg_at_1000
value: 36.551
- type: ndcg_at_3
value: 24.035999999999998
- type: ndcg_at_5
value: 26.525
- type: precision_at_1
value: 19.435
- type: precision_at_10
value: 4.565
- type: precision_at_100
value: 0.771
- type: precision_at_1000
value: 0.10200000000000001
- type: precision_at_3
value: 10.169
- type: precision_at_5
value: 7.571
- type: recall_at_1
value: 18.108
- type: recall_at_10
value: 39.533
- type: recall_at_100
value: 64.854
- type: recall_at_1000
value: 84.421
- type: recall_at_3
value: 27.500000000000004
- type: recall_at_5
value: 33.314
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.087
- type: map_at_10
value: 17.323
- type: map_at_100
value: 18.569
- type: map_at_1000
value: 18.694
- type: map_at_3
value: 15.370000000000001
- type: map_at_5
value: 16.538
- type: mrr_at_1
value: 13.557
- type: mrr_at_10
value: 21.041
- type: mrr_at_100
value: 22.134
- type: mrr_at_1000
value: 22.207
- type: mrr_at_3
value: 18.843
- type: mrr_at_5
value: 20.236
- type: ndcg_at_1
value: 13.557
- type: ndcg_at_10
value: 21.571
- type: ndcg_at_100
value: 27.678000000000004
- type: ndcg_at_1000
value: 30.8
- type: ndcg_at_3
value: 17.922
- type: ndcg_at_5
value: 19.826
- type: precision_at_1
value: 13.557
- type: precision_at_10
value: 4.1290000000000004
- type: precision_at_100
value: 0.8370000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 8.914
- type: precision_at_5
value: 6.691999999999999
- type: recall_at_1
value: 11.087
- type: recall_at_10
value: 30.94
- type: recall_at_100
value: 57.833999999999996
- type: recall_at_1000
value: 80.365
- type: recall_at_3
value: 20.854
- type: recall_at_5
value: 25.695
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.708
- type: map_at_10
value: 30.422
- type: map_at_100
value: 31.713
- type: map_at_1000
value: 31.842
- type: map_at_3
value: 27.424
- type: map_at_5
value: 29.17
- type: mrr_at_1
value: 26.756
- type: mrr_at_10
value: 35.304
- type: mrr_at_100
value: 36.296
- type: mrr_at_1000
value: 36.359
- type: mrr_at_3
value: 32.692
- type: mrr_at_5
value: 34.288999999999994
- type: ndcg_at_1
value: 26.756
- type: ndcg_at_10
value: 35.876000000000005
- type: ndcg_at_100
value: 41.708
- type: ndcg_at_1000
value: 44.359
- type: ndcg_at_3
value: 30.946
- type: ndcg_at_5
value: 33.404
- type: precision_at_1
value: 26.756
- type: precision_at_10
value: 6.795
- type: precision_at_100
value: 1.138
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 15.046999999999999
- type: precision_at_5
value: 10.972
- type: recall_at_1
value: 21.708
- type: recall_at_10
value: 47.315000000000005
- type: recall_at_100
value: 72.313
- type: recall_at_1000
value: 90.199
- type: recall_at_3
value: 33.528999999999996
- type: recall_at_5
value: 39.985
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.902
- type: map_at_10
value: 26.166
- type: map_at_100
value: 27.368
- type: map_at_1000
value: 27.493000000000002
- type: map_at_3
value: 23.505000000000003
- type: map_at_5
value: 25.019000000000002
- type: mrr_at_1
value: 23.402
- type: mrr_at_10
value: 30.787
- type: mrr_at_100
value: 31.735000000000003
- type: mrr_at_1000
value: 31.806
- type: mrr_at_3
value: 28.33
- type: mrr_at_5
value: 29.711
- type: ndcg_at_1
value: 23.402
- type: ndcg_at_10
value: 30.971
- type: ndcg_at_100
value: 36.61
- type: ndcg_at_1000
value: 39.507999999999996
- type: ndcg_at_3
value: 26.352999999999998
- type: ndcg_at_5
value: 28.488000000000003
- type: precision_at_1
value: 23.402
- type: precision_at_10
value: 5.799
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 12.633
- type: precision_at_5
value: 9.269
- type: recall_at_1
value: 18.902
- type: recall_at_10
value: 40.929
- type: recall_at_100
value: 65.594
- type: recall_at_1000
value: 85.961
- type: recall_at_3
value: 28.121000000000002
- type: recall_at_5
value: 33.638
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.168
- type: map_at_10
value: 25.142999999999997
- type: map_at_100
value: 25.993
- type: map_at_1000
value: 26.076
- type: map_at_3
value: 23.179
- type: map_at_5
value: 24.322
- type: mrr_at_1
value: 21.933
- type: mrr_at_10
value: 27.72
- type: mrr_at_100
value: 28.518
- type: mrr_at_1000
value: 28.582
- type: mrr_at_3
value: 25.791999999999998
- type: mrr_at_5
value: 26.958
- type: ndcg_at_1
value: 21.933
- type: ndcg_at_10
value: 28.866999999999997
- type: ndcg_at_100
value: 33.285
- type: ndcg_at_1000
value: 35.591
- type: ndcg_at_3
value: 25.202999999999996
- type: ndcg_at_5
value: 27.045
- type: precision_at_1
value: 21.933
- type: precision_at_10
value: 4.632
- type: precision_at_100
value: 0.733
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 10.992
- type: precision_at_5
value: 7.853000000000001
- type: recall_at_1
value: 19.168
- type: recall_at_10
value: 37.899
- type: recall_at_100
value: 58.54899999999999
- type: recall_at_1000
value: 75.666
- type: recall_at_3
value: 27.831
- type: recall_at_5
value: 32.336
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.764000000000001
- type: map_at_10
value: 17.757
- type: map_at_100
value: 18.677
- type: map_at_1000
value: 18.813
- type: map_at_3
value: 16.151
- type: map_at_5
value: 16.946
- type: mrr_at_1
value: 15.726
- type: mrr_at_10
value: 21.019
- type: mrr_at_100
value: 21.856
- type: mrr_at_1000
value: 21.954
- type: mrr_at_3
value: 19.282
- type: mrr_at_5
value: 20.189
- type: ndcg_at_1
value: 15.726
- type: ndcg_at_10
value: 21.259
- type: ndcg_at_100
value: 25.868999999999996
- type: ndcg_at_1000
value: 29.425
- type: ndcg_at_3
value: 18.204
- type: ndcg_at_5
value: 19.434
- type: precision_at_1
value: 15.726
- type: precision_at_10
value: 3.8920000000000003
- type: precision_at_100
value: 0.741
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 8.58
- type: precision_at_5
value: 6.132
- type: recall_at_1
value: 12.764000000000001
- type: recall_at_10
value: 28.639
- type: recall_at_100
value: 49.639
- type: recall_at_1000
value: 75.725
- type: recall_at_3
value: 19.883
- type: recall_at_5
value: 23.141000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.98
- type: map_at_10
value: 25.2
- type: map_at_100
value: 26.279000000000003
- type: map_at_1000
value: 26.399
- type: map_at_3
value: 23.399
- type: map_at_5
value: 24.284
- type: mrr_at_1
value: 22.015
- type: mrr_at_10
value: 28.555000000000003
- type: mrr_at_100
value: 29.497
- type: mrr_at_1000
value: 29.574
- type: mrr_at_3
value: 26.788
- type: mrr_at_5
value: 27.576
- type: ndcg_at_1
value: 22.015
- type: ndcg_at_10
value: 29.266
- type: ndcg_at_100
value: 34.721000000000004
- type: ndcg_at_1000
value: 37.659
- type: ndcg_at_3
value: 25.741000000000003
- type: ndcg_at_5
value: 27.044
- type: precision_at_1
value: 22.015
- type: precision_at_10
value: 4.897
- type: precision_at_100
value: 0.8540000000000001
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 11.567
- type: precision_at_5
value: 7.9479999999999995
- type: recall_at_1
value: 18.98
- type: recall_at_10
value: 38.411
- type: recall_at_100
value: 63.164
- type: recall_at_1000
value: 84.292
- type: recall_at_3
value: 28.576
- type: recall_at_5
value: 31.789
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.372
- type: map_at_10
value: 27.161
- type: map_at_100
value: 28.364
- type: map_at_1000
value: 28.554000000000002
- type: map_at_3
value: 25.135
- type: map_at_5
value: 26.200000000000003
- type: mrr_at_1
value: 24.704
- type: mrr_at_10
value: 31.219
- type: mrr_at_100
value: 32.092
- type: mrr_at_1000
value: 32.181
- type: mrr_at_3
value: 29.282000000000004
- type: mrr_at_5
value: 30.359
- type: ndcg_at_1
value: 24.704
- type: ndcg_at_10
value: 31.622
- type: ndcg_at_100
value: 36.917
- type: ndcg_at_1000
value: 40.357
- type: ndcg_at_3
value: 28.398
- type: ndcg_at_5
value: 29.764000000000003
- type: precision_at_1
value: 24.704
- type: precision_at_10
value: 5.81
- type: precision_at_100
value: 1.208
- type: precision_at_1000
value: 0.209
- type: precision_at_3
value: 13.241
- type: precision_at_5
value: 9.407
- type: recall_at_1
value: 20.372
- type: recall_at_10
value: 40.053
- type: recall_at_100
value: 64.71000000000001
- type: recall_at_1000
value: 87.607
- type: recall_at_3
value: 29.961
- type: recall_at_5
value: 34.058
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.424000000000001
- type: map_at_10
value: 20.541999999999998
- type: map_at_100
value: 21.495
- type: map_at_1000
value: 21.604
- type: map_at_3
value: 18.608
- type: map_at_5
value: 19.783
- type: mrr_at_1
value: 15.895999999999999
- type: mrr_at_10
value: 22.484
- type: mrr_at_100
value: 23.376
- type: mrr_at_1000
value: 23.467
- type: mrr_at_3
value: 20.548
- type: mrr_at_5
value: 21.731
- type: ndcg_at_1
value: 15.895999999999999
- type: ndcg_at_10
value: 24.343
- type: ndcg_at_100
value: 29.181
- type: ndcg_at_1000
value: 32.330999999999996
- type: ndcg_at_3
value: 20.518
- type: ndcg_at_5
value: 22.561999999999998
- type: precision_at_1
value: 15.895999999999999
- type: precision_at_10
value: 3.9739999999999998
- type: precision_at_100
value: 0.6799999999999999
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 9.057
- type: precision_at_5
value: 6.654
- type: recall_at_1
value: 14.424000000000001
- type: recall_at_10
value: 34.079
- type: recall_at_100
value: 56.728
- type: recall_at_1000
value: 80.765
- type: recall_at_3
value: 23.993000000000002
- type: recall_at_5
value: 28.838
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 41.665
- type: f1
value: 37.601137843331244
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 74.8052
- type: ap
value: 68.92588517572685
- type: f1
value: 74.66801685854456
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.2220702234382
- type: f1
value: 90.81687856852439
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.39124487004105
- type: f1
value: 51.8350043424968
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.80497646267652
- type: f1
value: 67.34213899244814
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.54270342972428
- type: f1
value: 74.02802500235784
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.488580544269002
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.80426879476371
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.37970068676043
- type: mrr
value: 32.48523694064166
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.862710845031565
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 54.270000736385626
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 80.89215288990194
- type: cos_sim_spearman
value: 74.386413188675
- type: euclidean_pearson
value: 78.83679563989534
- type: euclidean_spearman
value: 74.29328198771996
- type: manhattan_pearson
value: 78.77968796707641
- type: manhattan_spearman
value: 74.20887429784696
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 78.31858821914498
- type: cos_sim_spearman
value: 72.2217008523832
- type: euclidean_pearson
value: 75.38901061978429
- type: euclidean_spearman
value: 71.81255767675184
- type: manhattan_pearson
value: 75.49472202181288
- type: manhattan_spearman
value: 71.96322588726144
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 79.48334648997455
- type: cos_sim_spearman
value: 80.99654029572798
- type: euclidean_pearson
value: 80.46546523970035
- type: euclidean_spearman
value: 80.90646216980744
- type: manhattan_pearson
value: 80.35474057857608
- type: manhattan_spearman
value: 80.8141299909659
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.73826970784727
- type: cos_sim_spearman
value: 76.9926870133034
- type: euclidean_pearson
value: 79.6386542120984
- type: euclidean_spearman
value: 77.05041986942253
- type: manhattan_pearson
value: 79.61799508502459
- type: manhattan_spearman
value: 77.07169617647067
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 83.93999019426069
- type: cos_sim_spearman
value: 85.21166521594695
- type: euclidean_pearson
value: 84.97207676326357
- type: euclidean_spearman
value: 85.40726578482739
- type: manhattan_pearson
value: 85.0386693192183
- type: manhattan_spearman
value: 85.49230945586409
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 80.8133974034008
- type: cos_sim_spearman
value: 82.82919022688844
- type: euclidean_pearson
value: 81.92587923760179
- type: euclidean_spearman
value: 82.86629450518863
- type: manhattan_pearson
value: 81.98232365999253
- type: manhattan_spearman
value: 82.94313939920296
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.12872422642363
- type: cos_sim_spearman
value: 87.77672179979807
- type: euclidean_pearson
value: 87.76172961705947
- type: euclidean_spearman
value: 87.9891393339215
- type: manhattan_pearson
value: 87.78863663568221
- type: manhattan_spearman
value: 88.08297053203866
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.82824030232733
- type: cos_sim_spearman
value: 64.17079382633538
- type: euclidean_pearson
value: 61.31505225602925
- type: euclidean_spearman
value: 64.05080034530694
- type: manhattan_pearson
value: 61.77095758943306
- type: manhattan_spearman
value: 64.14475973774933
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 81.39239803497064
- type: cos_sim_spearman
value: 81.76637354520439
- type: euclidean_pearson
value: 82.98008209033587
- type: euclidean_spearman
value: 82.18662536188657
- type: manhattan_pearson
value: 82.9630328314908
- type: manhattan_spearman
value: 82.13726553603003
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.45753132898741
- type: mrr
value: 93.84029822755313
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8019801980198
- type: cos_sim_ap
value: 94.58629018512772
- type: cos_sim_f1
value: 89.84771573604061
- type: cos_sim_precision
value: 91.23711340206185
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.74950495049505
- type: dot_ap
value: 92.5761214576951
- type: dot_f1
value: 87.09841917389087
- type: dot_precision
value: 88.86576482830385
- type: dot_recall
value: 85.39999999999999
- type: euclidean_accuracy
value: 99.80495049504951
- type: euclidean_ap
value: 94.56231673602272
- type: euclidean_f1
value: 90.02531645569621
- type: euclidean_precision
value: 91.17948717948718
- type: euclidean_recall
value: 88.9
- type: manhattan_accuracy
value: 99.8009900990099
- type: manhattan_ap
value: 94.5775591647447
- type: manhattan_f1
value: 89.86384266263238
- type: manhattan_precision
value: 90.64089521871821
- type: manhattan_recall
value: 89.1
- type: max_accuracy
value: 99.80495049504951
- type: max_ap
value: 94.58629018512772
- type: max_f1
value: 90.02531645569621
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 53.088941385715735
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.146129414825744
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 48.7511362739003
- type: mrr
value: 49.61682210763093
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 67.43820000000001
- type: ap
value: 12.899489312331003
- type: f1
value: 52.03468121072981
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 57.475947934352
- type: f1
value: 57.77676730676238
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 38.3463456299738
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.94230196101806
- type: cos_sim_ap
value: 67.00916556336148
- type: cos_sim_f1
value: 63.046014257939085
- type: cos_sim_precision
value: 61.961783439490446
- type: cos_sim_recall
value: 64.16886543535621
- type: dot_accuracy
value: 83.18531322644095
- type: dot_ap
value: 63.112896030267066
- type: dot_f1
value: 59.06565656565657
- type: dot_precision
value: 56.63438256658596
- type: dot_recall
value: 61.715039577836414
- type: euclidean_accuracy
value: 83.94230196101806
- type: euclidean_ap
value: 67.19856676674463
- type: euclidean_f1
value: 63.08428413691571
- type: euclidean_precision
value: 58.9543682641596
- type: euclidean_recall
value: 67.83641160949868
- type: manhattan_accuracy
value: 83.91845979614949
- type: manhattan_ap
value: 66.9845327263072
- type: manhattan_f1
value: 62.693323274236135
- type: manhattan_precision
value: 59.884698534710544
- type: manhattan_recall
value: 65.77836411609499
- type: max_accuracy
value: 83.94230196101806
- type: max_ap
value: 67.19856676674463
- type: max_f1
value: 63.08428413691571
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.0777738968448
- type: cos_sim_ap
value: 84.19747786536
- type: cos_sim_f1
value: 75.91830995817077
- type: cos_sim_precision
value: 69.84671107949033
- type: cos_sim_recall
value: 83.14598090545118
- type: dot_accuracy
value: 87.14246904955951
- type: dot_ap
value: 82.37528804640529
- type: dot_f1
value: 74.40963166732163
- type: dot_precision
value: 69.4127841098447
- type: dot_recall
value: 80.18170619032954
- type: euclidean_accuracy
value: 88.08359529630924
- type: euclidean_ap
value: 84.22633217661986
- type: euclidean_f1
value: 76.09190339866403
- type: euclidean_precision
value: 72.70304390517605
- type: euclidean_recall
value: 79.81213427779488
- type: manhattan_accuracy
value: 88.08359529630924
- type: manhattan_ap
value: 84.18362004611083
- type: manhattan_f1
value: 76.08789625360231
- type: manhattan_precision
value: 71.49336582724072
- type: manhattan_recall
value: 81.3135201724669
- type: max_accuracy
value: 88.08359529630924
- type: max_ap
value: 84.22633217661986
- type: max_f1
value: 76.09190339866403
license: mit
language:
- en
---
# bge-small-en-v1.5-sparse
## Usage
This is the sparse ONNX variant of the [bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) embeddings model accelerated with [Sparsify](https://github.com/neuralmagic/sparsify) for quantization/pruning and [DeepSparseSentenceTransformers](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers) for inference.
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer
model = DeepSparseSentenceTransformer('neuralmagic/bge-small-en-v1.5-sparse', export=False)
# The sentences we would like to encode
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)
# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
print("Sentence:", sentence)
print("Embedding:", embedding.shape)
print("")
```
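The returned embeddings are plain NumPy vectors, so they can be compared directly; below is a minimal sketch of cosine similarity between two of them (the helper function is an assumption for illustration, not part of this repo):
```python
import numpy as np

def cosine_similarity(a, b):
    # Dot product of L2-normalized embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare the first two sentences encoded above
print("Similarity:", cosine_similarity(embeddings[0], embeddings[1]))
```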
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ). |
facebook/mms-tts-bim | facebook | "2023-09-01T14:25:26Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-09-01T14:25:09Z" |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Bimoba Text-to-Speech
This repository contains the **Bimoba (bim)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) composed of a posterior encoder, a decoder, and a conditional prior.
A set of spectrogram-based acoustic features is predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-bim")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-bim")
text = "some example text in the Bimoba language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
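Because of the stochastic duration predictor described above, generation is non-deterministic; fixing the seed before the forward pass (a minimal sketch) reproduces the same waveform:
```python
import torch

torch.manual_seed(555)  # any fixed seed makes the duration sampling repeatable

with torch.no_grad():
    output = model(**inputs).waveform
```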
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output.numpy(), rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
mrinaldi86/ppo-LunarLander-v3 | mrinaldi86 | "2025-03-01T10:15:39Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2025-03-01T06:51:33Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 76.90 +/- 129.35
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v3
This is a trained model of a PPO agent playing LunarLander-v3.
# Hyperparameters
|
fmcurti/whisper-small-minds14 | fmcurti | "2023-10-05T01:42:20Z" | 75 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-10-05T00:46:12Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
model-index:
- name: whisper-small-minds14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
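A minimal transcription sketch (standard `transformers` pipeline usage; the model id comes from this repo, while the audio path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the ASR pipeline
asr = pipeline("automatic-speech-recognition", model="fmcurti/whisper-small-minds14")

result = asr("sample.wav")  # placeholder path to a local audio file
print(result["text"])
```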
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| No log | 0.34 | 10 | 0.6913 | 0.2721 | 0.2786 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
jlbaker361/dcgan-lazy-wikiart500-resized-cond | jlbaker361 | "2024-02-01T20:18:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-02-01T19:48:12Z" | ---
{}
---
Creative Adversarial Network

- epochs: 2
- dataset: jlbaker361/wikiart-balanced500
- n_classes: 27
- batch_size: 4
- images were resized to 768 and then center cropped to 512
- clip: False
- conditional: True

discriminator parameters:

- init_dim: 32
- final_dim: 512

generator parameters:

- input noise_dim: 100
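A hypothetical sketch of how a conditional generator with these settings could be wired (class embedding concatenated with the noise vector; the layer stack and names are assumptions, not this repo's actual code):
```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, n_classes=27, embed_dim=27):
        super().__init__()
        self.label_embedding = nn.Embedding(n_classes, embed_dim)
        # Project noise + class embedding up through transposed convolutions
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim + embed_dim, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(True),
            # ... further upsampling stages would be needed to reach 512x512
            nn.ConvTranspose2d(256, 3, 4, 2, 1),
            nn.Tanh(),
        )

    def forward(self, noise, labels):
        cond = self.label_embedding(labels)  # (batch, embed_dim)
        x = torch.cat([noise, cond], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)
```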
|
dimasik2987/5b8b6377-d82b-4422-8212-c8996c66d55b | dimasik2987 | "2025-01-17T02:44:06Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-17T02:36:34Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5b8b6377-d82b-4422-8212-c8996c66d55b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 47a6f208ef44bbdb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/47a6f208ef44bbdb_train_data.json
type:
field_instruction: detail
field_output: aa_seq
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dimasik2987/5b8b6377-d82b-4422-8212-c8996c66d55b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/47a6f208ef44bbdb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8edada43-681e-4176-9009-d7a6d87b92b9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8edada43-681e-4176-9009-d7a6d87b92b9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5b8b6377-d82b-4422-8212-c8996c66d55b
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
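Since this checkpoint is a LoRA adapter rather than full model weights, a minimal loading sketch (standard PEFT usage, assumed rather than documented here) looks like:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter weights on top
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "dimasik2987/5b8b6377-d82b-4422-8212-c8996c66d55b")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
```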
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0006 | 5 | nan |
| 0.0 | 0.0012 | 10 | nan |
| 0.0 | 0.0018 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Karajan42/gama_router_olly_v1 | Karajan42 | "2024-06-04T06:59:35Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:creativeml-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-04T06:40:52Z" | ---
license: creativeml-openrail-m
---
|
relaxml/Llama-3.1-405B-Instruct-QTIP-2Bit-TP8 | relaxml | "2024-10-28T02:49:19Z" | 5 | 1 | null | [
"safetensors",
"llama",
"region:us"
] | null | "2024-10-19T20:09:38Z" | This model is compatible with tensor parallelism. The RHT runs per-GPU instead of across GPUs. q, k, v, up, and gate are split along the output channel, and o and down are split along the input channel.
This model has slightly worse quality than the non "TP8" model. |
migueldeguzmandev/GPT2XL-RLLM-19 | migueldeguzmandev | "2025-02-11T18:40:09Z" | 72 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-25T12:19:24Z" | **Repository for GPT2XL-RLLM-19 Model; feel free to use / download the files for research purposes.**
**Related post:** [Reinforcement Learning using Layered Morphology (RLLM)](https://www.lesswrong.com/posts/GrxaMeekGKK6WKwmm/rl-for-safety-work-or-just-clever-rl-reinforcement-learning?utm_campaign=post_share&utm_source=link) |
ChakuChidiya/distilbert-base-uncased-G3 | ChakuChidiya | "2024-04-24T13:34:58Z" | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:ChakuChidiya/distilbert-base-uncased-G2",
"base_model:finetune:ChakuChidiya/distilbert-base-uncased-G2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-24T07:31:05Z" | ---
license: apache-2.0
base_model: ChakuChidiya/distilbert-base-uncased-G2
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased-G3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-G3
This model is a fine-tuned version of [ChakuChidiya/distilbert-base-uncased-G2](https://huggingface.co/ChakuChidiya/distilbert-base-uncased-G2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2192
- Validation Loss: 0.3240
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1920, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.07}
- training_precision: float32
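The optimizer configuration above corresponds roughly to transformers' TF helper (a sketch; the step count and decay settings are taken from the config above, everything else assumed at defaults):
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (power=1.0) polynomial decay over 1920 steps
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=1920,
    num_warmup_steps=0,
    weight_decay_rate=0.07,
)
```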
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3628 | 0.3204 | 0 |
| 0.2708 | 0.3328 | 1 |
| 0.2192 | 0.3240 | 2 |
### Framework versions
- Transformers 4.37.0
- TensorFlow 2.15.0
- Datasets 2.14.5
- Tokenizers 0.15.1
|