| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-26 18:27:55 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 499 classes |
| tags | sequence | lengths 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-26 18:27:32 |
| card | string | lengths 11 – 1.01M |
eventdata-utd/ConfliBERT-cont-cased-BBC_News | eventdata-utd | 2024-05-20T21:26:26Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2024-04-04T20:05:47Z | ---
language:
- en
---
Political news dataset from BBC for relevance classification. |
sxg2520/tiny-chatbot-dpo | sxg2520 | 2024-05-20T21:25:59Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T21:23:56Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tiny-chatbot-dpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-chatbot-dpo
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
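For orientation only, the sketch below shows how these settings could map onto a TRL `DPOTrainer` run with a PEFT adapter. It is not the script used for this model: the preference data is a toy stand-in, the LoRA settings and `beta` are assumptions, and the keyword arguments follow the TRL 0.8-era API (newer releases move `beta` and related options into `DPOConfig`).
```python
# Hedged sketch of a comparable TRL DPO + PEFT setup (not the original training script).
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# Toy preference data; DPO expects "prompt" / "chosen" / "rejected" columns.
train_dataset = Dataset.from_dict({
    "prompt": ["What is water made of?"],
    "chosen": ["Water is H2O: two hydrogen atoms bonded to one oxygen atom."],
    "rejected": ["Water is made of tiny robots."],
})

trainer = DPOTrainer(
    model=model,
    ref_model=None,                     # with a PEFT adapter, TRL derives the reference model
    args=TrainingArguments(
        output_dir="tiny-chatbot-dpo",
        per_device_train_batch_size=1,  # train_batch_size: 1
        per_device_eval_batch_size=8,   # eval_batch_size: 8
        learning_rate=2e-4,             # learning_rate: 0.0002
        lr_scheduler_type="cosine",
        max_steps=250,                  # training_steps: 250
        seed=42,
        fp16=True,                      # mixed_precision_training: Native AMP
    ),
    beta=0.1,                           # DPO temperature; not reported in this card
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32),  # assumed LoRA config
)
trainer.train()
```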
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
yifanxie/jasper-cicada | yifanxie | 2024-05-20T21:19:32Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-20T21:17:28Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.40.1
```
Also make sure you provide your Hugging Face token to the pipeline if the model is hosted in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="yifanxie/jasper-cicada",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<eos><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"yifanxie/jasper-cicada",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"yifanxie/jasper-cicada",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yifanxie/jasper-cicada" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<eos><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 256
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
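For example, a minimal sketch of 4-bit loading with automatic GPU sharding (an illustration rather than an official snippet; requires `bitsandbytes`, and newer transformers releases prefer passing a `BitsAndBytesConfig` via `quantization_config`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yifanxie/jasper-cicada", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "yifanxie/jasper-cicada",
    load_in_4bit=True,     # or load_in_8bit=True
    device_map="auto",     # shard layers across all visible GPUs
    trust_remote_code=True,
)
```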
## Model Architecture
```
GemmaForCausalLM(
(model): GemmaModel(
(embed_tokens): Embedding(256000, 2048, padding_idx=0)
(layers): ModuleList(
(0-17): 18 x GemmaDecoderLayer(
(self_attn): GemmaSdpaAttention(
(q_proj): Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): Linear(in_features=2048, out_features=256, bias=False)
(v_proj): Linear(in_features=2048, out_features=256, bias=False)
(o_proj): Linear(in_features=2048, out_features=2048, bias=False)
(rotary_emb): GemmaRotaryEmbedding()
)
(mlp): GemmaMLP(
(gate_proj): Linear(in_features=2048, out_features=16384, bias=False)
(up_proj): Linear(in_features=2048, out_features=16384, bias=False)
(down_proj): Linear(in_features=16384, out_features=2048, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): GemmaRMSNorm()
(post_attention_layernorm): GemmaRMSNorm()
)
)
(norm): GemmaRMSNorm()
)
(lm_head): Linear(in_features=2048, out_features=256000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
matthieuzone/PECORINObis | matthieuzone | 2024-05-20T21:16:23Z | 4 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T21:08:02Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/PECORINObis
<Gallery />
## Model description
These are matthieuzone/PECORINObis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/PECORINObis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
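Until the authors provide an official snippet, a minimal sketch using the standard diffusers LoRA-loading API (assumes a CUDA GPU; the prompt follows the trigger words above) might look like:
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Load the SDXL base model with the fp16-fix VAE mentioned above, then attach the LoRA.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("matthieuzone/PECORINObis")

image = pipe("a photo of sks cheese", num_inference_steps=30).images[0]
image.save("sks_cheese.png")
```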
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Triangles/gpt-neo-125m-finetuned-philosopher_rave_100 | Triangles | 2024-05-20T21:14:19Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-04T15:55:58Z | ---
license: mit
tags:
- generated_from_trainer
base_model: EleutherAI/gpt-neo-125m
model-index:
- name: gpt-neo-125m-finetuned-philosopher_rave_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125m-finetuned-philosopher_rave_100
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
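As a rough illustration only (the actual training corpus and script are not documented here), these settings could map onto a Hugging Face `Trainer` run as sketched below; the two-sentence dataset is a placeholder.
```python
# Hedged sketch of a comparable causal-LM fine-tuning setup (placeholder data, not the real corpus).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
tokenizer.pad_token = tokenizer.eos_token

# Placeholder text; the real run used an undocumented "philosopher" corpus.
texts = Dataset.from_dict({"text": ["An example philosophical sentence.",
                                    "Another stand-in training line."]})
tokenized = texts.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                      batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt-neo-125m-finetuned-philosopher_rave_100",
    learning_rate=3e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=100,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # stand-in; the card reports a separate validation split
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```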
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 155 | 2.6967 |
| No log | 2.0 | 310 | 2.6846 |
| No log | 3.0 | 465 | 2.6733 |
| 2.6891 | 4.0 | 620 | 2.6626 |
| 2.6891 | 5.0 | 775 | 2.6524 |
| 2.6891 | 6.0 | 930 | 2.6427 |
| 2.6569 | 7.0 | 1085 | 2.6336 |
| 2.6569 | 8.0 | 1240 | 2.6248 |
| 2.6569 | 9.0 | 1395 | 2.6164 |
| 2.6215 | 10.0 | 1550 | 2.6083 |
| 2.6215 | 11.0 | 1705 | 2.6005 |
| 2.6215 | 12.0 | 1860 | 2.5931 |
| 2.6022 | 13.0 | 2015 | 2.5858 |
| 2.6022 | 14.0 | 2170 | 2.5789 |
| 2.6022 | 15.0 | 2325 | 2.5721 |
| 2.6022 | 16.0 | 2480 | 2.5657 |
| 2.5777 | 17.0 | 2635 | 2.5594 |
| 2.5777 | 18.0 | 2790 | 2.5532 |
| 2.5777 | 19.0 | 2945 | 2.5473 |
| 2.5548 | 20.0 | 3100 | 2.5416 |
| 2.5548 | 21.0 | 3255 | 2.5360 |
| 2.5548 | 22.0 | 3410 | 2.5306 |
| 2.5359 | 23.0 | 3565 | 2.5253 |
| 2.5359 | 24.0 | 3720 | 2.5202 |
| 2.5359 | 25.0 | 3875 | 2.5152 |
| 2.5248 | 26.0 | 4030 | 2.5103 |
| 2.5248 | 27.0 | 4185 | 2.5056 |
| 2.5248 | 28.0 | 4340 | 2.5011 |
| 2.5248 | 29.0 | 4495 | 2.4966 |
| 2.5053 | 30.0 | 4650 | 2.4922 |
| 2.5053 | 31.0 | 4805 | 2.4880 |
| 2.5053 | 32.0 | 4960 | 2.4839 |
| 2.4871 | 33.0 | 5115 | 2.4798 |
| 2.4871 | 34.0 | 5270 | 2.4759 |
| 2.4871 | 35.0 | 5425 | 2.4721 |
| 2.4808 | 36.0 | 5580 | 2.4683 |
| 2.4808 | 37.0 | 5735 | 2.4647 |
| 2.4808 | 38.0 | 5890 | 2.4612 |
| 2.4659 | 39.0 | 6045 | 2.4577 |
| 2.4659 | 40.0 | 6200 | 2.4544 |
| 2.4659 | 41.0 | 6355 | 2.4511 |
| 2.4517 | 42.0 | 6510 | 2.4479 |
| 2.4517 | 43.0 | 6665 | 2.4447 |
| 2.4517 | 44.0 | 6820 | 2.4417 |
| 2.4517 | 45.0 | 6975 | 2.4387 |
| 2.4466 | 46.0 | 7130 | 2.4359 |
| 2.4466 | 47.0 | 7285 | 2.4330 |
| 2.4466 | 48.0 | 7440 | 2.4303 |
| 2.4348 | 49.0 | 7595 | 2.4276 |
| 2.4348 | 50.0 | 7750 | 2.4250 |
| 2.4348 | 51.0 | 7905 | 2.4225 |
| 2.4238 | 52.0 | 8060 | 2.4201 |
| 2.4238 | 53.0 | 8215 | 2.4177 |
| 2.4238 | 54.0 | 8370 | 2.4154 |
| 2.4172 | 55.0 | 8525 | 2.4131 |
| 2.4172 | 56.0 | 8680 | 2.4109 |
| 2.4172 | 57.0 | 8835 | 2.4088 |
| 2.4172 | 58.0 | 8990 | 2.4067 |
| 2.4097 | 59.0 | 9145 | 2.4047 |
| 2.4097 | 60.0 | 9300 | 2.4027 |
| 2.4097 | 61.0 | 9455 | 2.4008 |
| 2.4054 | 62.0 | 9610 | 2.3990 |
| 2.4054 | 63.0 | 9765 | 2.3972 |
| 2.4054 | 64.0 | 9920 | 2.3955 |
| 2.3936 | 65.0 | 10075 | 2.3938 |
| 2.3936 | 66.0 | 10230 | 2.3922 |
| 2.3936 | 67.0 | 10385 | 2.3906 |
| 2.394 | 68.0 | 10540 | 2.3891 |
| 2.394 | 69.0 | 10695 | 2.3877 |
| 2.394 | 70.0 | 10850 | 2.3863 |
| 2.387 | 71.0 | 11005 | 2.3850 |
| 2.387 | 72.0 | 11160 | 2.3837 |
| 2.387 | 73.0 | 11315 | 2.3824 |
| 2.387 | 74.0 | 11470 | 2.3813 |
| 2.3812 | 75.0 | 11625 | 2.3801 |
| 2.3812 | 76.0 | 11780 | 2.3791 |
| 2.3812 | 77.0 | 11935 | 2.3780 |
| 2.3812 | 78.0 | 12090 | 2.3771 |
| 2.3812 | 79.0 | 12245 | 2.3762 |
| 2.3812 | 80.0 | 12400 | 2.3753 |
| 2.3802 | 81.0 | 12555 | 2.3745 |
| 2.3802 | 82.0 | 12710 | 2.3737 |
| 2.3802 | 83.0 | 12865 | 2.3730 |
| 2.3687 | 84.0 | 13020 | 2.3723 |
| 2.3687 | 85.0 | 13175 | 2.3717 |
| 2.3687 | 86.0 | 13330 | 2.3711 |
| 2.3687 | 87.0 | 13485 | 2.3706 |
| 2.3722 | 88.0 | 13640 | 2.3702 |
| 2.3722 | 89.0 | 13795 | 2.3698 |
| 2.3722 | 90.0 | 13950 | 2.3694 |
| 2.3693 | 91.0 | 14105 | 2.3691 |
| 2.3693 | 92.0 | 14260 | 2.3688 |
| 2.3693 | 93.0 | 14415 | 2.3686 |
| 2.3654 | 94.0 | 14570 | 2.3684 |
| 2.3654 | 95.0 | 14725 | 2.3683 |
| 2.3654 | 96.0 | 14880 | 2.3682 |
| 2.372 | 97.0 | 15035 | 2.3682 |
| 2.372 | 98.0 | 15190 | 2.3681 |
| 2.372 | 99.0 | 15345 | 2.3681 |
| 2.3664 | 100.0 | 15500 | 2.3681 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-994439 | fine-tuned | 2024-05-20T21:11:20Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Argumentation",
"Corpus",
"Research",
"Dataset",
"Quality",
"custom_code",
"en",
"dataset:fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-994439",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-20T21:11:05Z | ---
license: apache-2.0
datasets:
- fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-994439
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Argumentation
- Corpus
- Research
- Dataset
- Quality
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
academic research data retrieval
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-994439',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
beam-searchers/dpo-llama-lora-model | beam-searchers | 2024-05-20T21:08:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T20:13:04Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mergekit-community/TopEvolution | mergekit-community | 2024-05-20T21:01:09Z | 12 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:mergekit-community/mergekit-slerp-ebgdloh",
"base_model:merge:mergekit-community/mergekit-slerp-ebgdloh",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T20:53:23Z | ---
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- mergekit-community/mergekit-slerp-ebgdloh
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [mergekit-community/mergekit-slerp-ebgdloh](https://huggingface.co/mergekit-community/mergekit-slerp-ebgdloh)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: mergekit-community/mergekit-slerp-ebgdloh
merge_method: slerp
base_model: mergekit-community/mergekit-slerp-ebgdloh
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
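A short usage sketch (not part of the original card) for loading the merged model with transformers; the prompt and generation settings are placeholders:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mergekit-community/TopEvolution")
model = AutoModelForCausalLM.from_pretrained(
    "mergekit-community/TopEvolution",
    torch_dtype=torch.bfloat16,   # the merge was produced in bfloat16
    device_map="auto",
)

inputs = tokenizer("Explain SLERP merging in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```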
|
MrBlackSheep/CartoonBoobsMix_CBMXv10_Inpainting | MrBlackSheep | 2024-05-20T21:00:49Z | 13 | 0 | diffusers | [
"diffusers",
"checkpoint",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] | image-to-image | 2024-02-28T13:52:36Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: image-to-image
tags:
- checkpoint
---
An **inpainting model** for the CBMX (Cartoon Boobs Mix) checkpoint.
- **Developed by:** MrBlackSheep
- **Model type:** Checkpoint **Inpaint model**
- **License:** creativeml-openrail-m
 |
matthieuzone/OSSAU-_IRATYbis | matthieuzone | 2024-05-20T20:58:56Z | 4 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T20:50:46Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/OSSAU-_IRATYbis
<Gallery />
## Model description
These are matthieuzone/OSSAU-_IRATYbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/OSSAU-_IRATYbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
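Until the authors provide an official snippet, a minimal sketch using the standard diffusers LoRA-loading API (assumes a CUDA GPU; the prompt follows the trigger words above) might look like:
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("matthieuzone/OSSAU-_IRATYbis")

image = pipe("a photo of sks cheese", num_inference_steps=30).images[0]
image.save("sks_cheese.png")
```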
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
OwOpeepeepoopoo/LittleJerry5 | OwOpeepeepoopoo | 2024-05-20T20:55:07Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T12:09:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rajiv-data-chef/outputs | rajiv-data-chef | 2024-05-20T20:54:51Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llm-finetuned elsa entity-level sentiment-analysis",
"generated_from_trainer",
"base_model:abacusai/Llama-3-Smaug-8B",
"base_model:adapter:abacusai/Llama-3-Smaug-8B",
"license:llama2",
"region:us"
] | null | 2024-05-20T20:52:04Z | ---
license: llama2
library_name: peft
tags:
- llm-finetuned elsa entity-level sentiment-analysis
- generated_from_trainer
base_model: abacusai/Llama-3-Smaug-8B
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 20
- mixed_precision_training: Native AMP
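For reference only, the sketch below maps these settings onto `TrainingArguments`; the trainer class, dataset, and LoRA configuration used for this run are not documented in this card.
```python
from transformers import TrainingArguments

# Hedged mapping of the reported hyperparameters; not the original training script.
args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-4,              # learning_rate: 0.0002
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # total_train_batch_size: 4
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=20,                    # training_steps: 20
    seed=42,
    fp16=True,                       # mixed_precision_training: Native AMP
)
```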
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
maneln/tinyllamaconvo | maneln | 2024-05-20T20:54:45Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T19:57:50Z | ---
license: apache-2.0
---
|
LoneStriker/Yi-1.5-34B-32K-8.0bpw-h8-exl2 | LoneStriker | 2024-05-20T20:49:49Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T20:37:34Z | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
kadirnar/sdxl-vton-trainv1 | kadirnar | 2024-05-20T20:44:25Z | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"base_model:SG161222/RealVisXL_V4.0",
"base_model:finetune:SG161222/RealVisXL_V4.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-20T17:52:28Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers-training
- diffusers
base_model: SG161222/RealVisXL_V4.0
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - kadirnar/sdxl-vton-trainv1
This pipeline was finetuned from **SG161222/RealVisXL_V4.0** on the **TryOnVirtual/VITON-HD-Captions** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: a photo of a model wearing:




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
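Until the authors add a snippet, a minimal sketch for running the finetuned pipeline with diffusers (the prompt is an illustrative extension of the training prompt above; assumes a CUDA GPU):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "kadirnar/sdxl-vton-trainv1",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of a model wearing a red dress", num_inference_steps=30).images[0]
image.save("vton_sample.png")
```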
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
SafeVLLMs/duplicated_gpr1200-pruned | SafeVLLMs | 2024-05-20T20:42:55Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-20T20:25:14Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seregadgl101/baii_v12_14ep | seregadgl101 | 2024-05-20T20:42:33Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-20T20:40:33Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# seregadgl101/baii_v12_14ep
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('seregadgl101/baii_v12_14ep')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=seregadgl101/baii_v12_14ep)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
NAMJOON/kisa-fine-tuned2 | NAMJOON | 2024-05-20T20:39:36Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-20T20:28:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LoneStriker/Yi-1.5-34B-32K-6.0bpw-h6-exl2 | LoneStriker | 2024-05-20T20:37:30Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T20:26:37Z | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or outperforms larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or outperforms larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
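As a rough reference only, the original full-precision checkpoint can be loaded with `transformers` as sketched below. Note that this particular repository holds an exl2 quantization, which is consumed by ExLlamaV2-based tools (for example text-generation-webui) rather than by this snippet; the repo id below points to the upstream 01-ai weights, not to this quant.

```python
# Minimal sketch for the upstream full-precision base model (assumption: not this exl2 repo).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-1.5-34B-32K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Yi-1.5-34B-32K is a base (non-chat) model, so plain text completion is used here.
inputs = tokenizer("There is a place where time stands still.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```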
|
alexx1/llama3-omegle-lora-r128-gguf | alexx1 | 2024-05-20T20:34:52Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T20:30:53Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** alexx1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
OpenLLM-Ro/RoLlama2-7b-Instruct-GPTQ-4Bits-wikitext2 | OpenLLM-Ro | 2024-05-20T20:34:30Z | 88 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-20T20:26:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
GPTQ 4Bit quantised version of RoLlama2-7b-Instruct
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
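Until an official snippet is added, here is a minimal sketch of loading a 4-bit GPTQ checkpoint with `transformers`. It assumes the GPTQ integration is installed (e.g. `pip install optimum auto-gptq`) and that this repository follows the standard transformers GPTQ layout; the prompt is purely illustrative.

```python
# Minimal sketch (not an official example) for a 4-bit GPTQ checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenLLM-Ro/RoLlama2-7b-Instruct-GPTQ-4Bits-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored with the checkpoint is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Care este capitala României?"  # illustrative; the expected instruct format is not documented here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```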
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YYYYYYibo/nash_simple_online_iter_2 | YYYYYYibo | 2024-05-20T20:28:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:updated",
"dataset:original",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:adapter:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T18:41:47Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: alignment-handbook/zephyr-7b-sft-full
datasets:
- updated
- original
model-index:
- name: nash_simple_online_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nash_simple_online_iter_2
This model is a fine-tuned version of [YYYYYYibo/nash_simple_online_iter_1](https://huggingface.co/YYYYYYibo/nash_simple_online_iter_1) on the updated and the original datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6782
- Rewards/chosen: -0.0686
- Rewards/rejected: -0.0971
- Rewards/accuracies: 0.6100
- Rewards/margins: 0.0284
- Logps/rejected: -268.4390
- Logps/chosen: -288.5429
- Logits/rejected: -2.5385
- Logits/chosen: -2.6271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6886 | 0.64 | 100 | 0.6782 | -0.0686 | -0.0971 | 0.6100 | 0.0284 | -268.4390 | -288.5429 | -2.5385 | -2.6271 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
BilalMuftuoglu/beit-base-patch16-224-85-fold2 | BilalMuftuoglu | 2024-05-20T20:28:16Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T20:07:39Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-85-fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9318181818181818
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-85-fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- Accuracy: 0.9318
## Model description
More information needed
## Intended uses & limitations
More information needed
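Although this section is still to be filled in, a fine-tuned BEiT image classifier is typically run through the standard `image-classification` pipeline. The sketch below is illustrative only; the label set comes from the undisclosed `imagefolder` training data, and `example.jpg` is a placeholder path.

```python
# Minimal inference sketch (not from the original card).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="BilalMuftuoglu/beit-base-patch16-224-85-fold2",
)
print(classifier("example.jpg"))  # returns a list of {"label": ..., "score": ...} dicts
```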
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6057 | 0.7273 |
| No log | 2.0 | 4 | 0.6639 | 0.7045 |
| No log | 3.0 | 6 | 0.7324 | 0.7045 |
| No log | 4.0 | 8 | 0.5213 | 0.7273 |
| 0.5701 | 5.0 | 10 | 0.4717 | 0.8182 |
| 0.5701 | 6.0 | 12 | 0.5339 | 0.7045 |
| 0.5701 | 7.0 | 14 | 0.4959 | 0.7273 |
| 0.5701 | 8.0 | 16 | 0.4086 | 0.8409 |
| 0.5701 | 9.0 | 18 | 0.4039 | 0.8182 |
| 0.4248 | 10.0 | 20 | 0.4106 | 0.8182 |
| 0.4248 | 11.0 | 22 | 0.4108 | 0.8409 |
| 0.4248 | 12.0 | 24 | 0.4607 | 0.7727 |
| 0.4248 | 13.0 | 26 | 0.4446 | 0.7727 |
| 0.4248 | 14.0 | 28 | 0.3912 | 0.8409 |
| 0.3579 | 15.0 | 30 | 0.5183 | 0.7727 |
| 0.3579 | 16.0 | 32 | 0.2991 | 0.8864 |
| 0.3579 | 17.0 | 34 | 0.3587 | 0.8182 |
| 0.3579 | 18.0 | 36 | 0.3110 | 0.8182 |
| 0.3579 | 19.0 | 38 | 0.3084 | 0.8636 |
| 0.2838 | 20.0 | 40 | 0.3079 | 0.8864 |
| 0.2838 | 21.0 | 42 | 0.3033 | 0.8409 |
| 0.2838 | 22.0 | 44 | 0.3126 | 0.8409 |
| 0.2838 | 23.0 | 46 | 0.3171 | 0.8864 |
| 0.2838 | 24.0 | 48 | 0.2689 | 0.8636 |
| 0.2705 | 25.0 | 50 | 0.3175 | 0.8409 |
| 0.2705 | 26.0 | 52 | 0.3464 | 0.8409 |
| 0.2705 | 27.0 | 54 | 0.3092 | 0.8636 |
| 0.2705 | 28.0 | 56 | 0.3178 | 0.8636 |
| 0.2705 | 29.0 | 58 | 0.4107 | 0.7955 |
| 0.1887 | 30.0 | 60 | 0.4151 | 0.8182 |
| 0.1887 | 31.0 | 62 | 0.5450 | 0.7955 |
| 0.1887 | 32.0 | 64 | 0.2892 | 0.8409 |
| 0.1887 | 33.0 | 66 | 0.4078 | 0.8409 |
| 0.1887 | 34.0 | 68 | 0.2821 | 0.8636 |
| 0.1692 | 35.0 | 70 | 0.2708 | 0.8636 |
| 0.1692 | 36.0 | 72 | 0.2692 | 0.8864 |
| 0.1692 | 37.0 | 74 | 0.2806 | 0.8864 |
| 0.1692 | 38.0 | 76 | 0.4613 | 0.8182 |
| 0.1692 | 39.0 | 78 | 0.2887 | 0.9091 |
| 0.1623 | 40.0 | 80 | 0.4046 | 0.8409 |
| 0.1623 | 41.0 | 82 | 0.4542 | 0.8409 |
| 0.1623 | 42.0 | 84 | 0.3010 | 0.8636 |
| 0.1623 | 43.0 | 86 | 0.2954 | 0.8636 |
| 0.1623 | 44.0 | 88 | 0.2838 | 0.8864 |
| 0.1522 | 45.0 | 90 | 0.2675 | 0.8864 |
| 0.1522 | 46.0 | 92 | 0.2517 | 0.9091 |
| 0.1522 | 47.0 | 94 | 0.2687 | 0.9091 |
| 0.1522 | 48.0 | 96 | 0.2551 | 0.9091 |
| 0.1522 | 49.0 | 98 | 0.2661 | 0.8864 |
| 0.1379 | 50.0 | 100 | 0.3507 | 0.8182 |
| 0.1379 | 51.0 | 102 | 0.2629 | 0.8864 |
| 0.1379 | 52.0 | 104 | 0.2697 | 0.8864 |
| 0.1379 | 53.0 | 106 | 0.3081 | 0.8636 |
| 0.1379 | 54.0 | 108 | 0.3851 | 0.8409 |
| 0.1283 | 55.0 | 110 | 0.3104 | 0.8636 |
| 0.1283 | 56.0 | 112 | 0.3624 | 0.8864 |
| 0.1283 | 57.0 | 114 | 0.3199 | 0.8864 |
| 0.1283 | 58.0 | 116 | 0.4964 | 0.8182 |
| 0.1283 | 59.0 | 118 | 0.3356 | 0.8864 |
| 0.1335 | 60.0 | 120 | 0.2314 | 0.9091 |
| 0.1335 | 61.0 | 122 | 0.2334 | 0.9091 |
| 0.1335 | 62.0 | 124 | 0.3961 | 0.8636 |
| 0.1335 | 63.0 | 126 | 0.3453 | 0.8636 |
| 0.1335 | 64.0 | 128 | 0.2806 | 0.8636 |
| 0.1353 | 65.0 | 130 | 0.3372 | 0.8636 |
| 0.1353 | 66.0 | 132 | 0.2675 | 0.8864 |
| 0.1353 | 67.0 | 134 | 0.3482 | 0.8864 |
| 0.1353 | 68.0 | 136 | 0.3725 | 0.8636 |
| 0.1353 | 69.0 | 138 | 0.3769 | 0.8636 |
| 0.099 | 70.0 | 140 | 0.5170 | 0.8409 |
| 0.099 | 71.0 | 142 | 0.4710 | 0.8636 |
| 0.099 | 72.0 | 144 | 0.3266 | 0.9091 |
| 0.099 | 73.0 | 146 | 0.3390 | 0.8636 |
| 0.099 | 74.0 | 148 | 0.3051 | 0.8636 |
| 0.1179 | 75.0 | 150 | 0.3030 | 0.9091 |
| 0.1179 | 76.0 | 152 | 0.3208 | 0.9091 |
| 0.1179 | 77.0 | 154 | 0.2954 | 0.9091 |
| 0.1179 | 78.0 | 156 | 0.2777 | 0.9091 |
| 0.1179 | 79.0 | 158 | 0.2763 | 0.9318 |
| 0.1077 | 80.0 | 160 | 0.3059 | 0.9091 |
| 0.1077 | 81.0 | 162 | 0.3445 | 0.8864 |
| 0.1077 | 82.0 | 164 | 0.3239 | 0.9091 |
| 0.1077 | 83.0 | 166 | 0.3175 | 0.9091 |
| 0.1077 | 84.0 | 168 | 0.3214 | 0.9091 |
| 0.0907 | 85.0 | 170 | 0.3313 | 0.9091 |
| 0.0907 | 86.0 | 172 | 0.3492 | 0.9091 |
| 0.0907 | 87.0 | 174 | 0.3644 | 0.9091 |
| 0.0907 | 88.0 | 176 | 0.3637 | 0.9091 |
| 0.0907 | 89.0 | 178 | 0.3750 | 0.9091 |
| 0.0972 | 90.0 | 180 | 0.3845 | 0.9091 |
| 0.0972 | 91.0 | 182 | 0.3749 | 0.9091 |
| 0.0972 | 92.0 | 184 | 0.3721 | 0.8864 |
| 0.0972 | 93.0 | 186 | 0.3680 | 0.8864 |
| 0.0972 | 94.0 | 188 | 0.3634 | 0.8864 |
| 0.0733 | 95.0 | 190 | 0.3565 | 0.9091 |
| 0.0733 | 96.0 | 192 | 0.3519 | 0.9091 |
| 0.0733 | 97.0 | 194 | 0.3529 | 0.9091 |
| 0.0733 | 98.0 | 196 | 0.3536 | 0.9091 |
| 0.0733 | 99.0 | 198 | 0.3561 | 0.9091 |
| 0.079 | 100.0 | 200 | 0.3565 | 0.9091 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
LoneStriker/Yi-1.5-34B-32K-5.0bpw-h6-exl2 | LoneStriker | 2024-05-20T20:26:34Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T20:17:26Z | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or outperforms larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or outperforms larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
0xBeaverT/remember-singer-2 | 0xBeaverT | 2024-05-20T20:25:58Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T20:25:17Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
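No official snippet is provided. Based on the tags (a Qwen2 chat model fine-tuned with LLaMA-Factory), a minimal sketch using the tokenizer's chat template would look roughly like the following; the message content is a placeholder.

```python
# Minimal sketch (not an official example); assumes the checkpoint ships a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0xBeaverT/remember-singer-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Hello! Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```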
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matthieuzone/MOTHAISbis | matthieuzone | 2024-05-20T20:25:21Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T20:17:11Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MOTHAISbis
<Gallery />
## Model description
These are matthieuzone/MOTHAISbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/MOTHAISbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
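Until the TODO above is filled in, a minimal sketch for running these LoRA weights on top of the SDXL base pipeline with `diffusers` could look like this; the prompt and step count are illustrative.

```python
# Minimal sketch (not an official example): load SDXL, attach this LoRA, and generate.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/MOTHAISbis")

# Uses the trigger phrase from the "Trigger words" section above.
image = pipe("a photo of sks cheese on a rustic wooden table", num_inference_steps=30).images[0]
image.save("sks_cheese.png")
```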
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
seregadgl101/baii_v12_13ep | seregadgl101 | 2024-05-20T20:21:34Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-20T20:19:50Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# seregadgl101/baii_v12_13ep
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('seregadgl101/baii_v12_13ep')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=seregadgl101/baii_v12_13ep)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
alexx1/llama3-omegle-lora-r128-16bit | alexx1 | 2024-05-20T20:16:20Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T20:13:23Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** alexx1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WesPro/RP-Llama-4x8B-MoE | WesPro | 2024-05-20T20:15:14Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-05T21:57:30Z | This is my first Llama 3 MoE model, built with the following config:
base_model: Llama-3-RPMerge-8B-SLERP
experts:
- source_model: Llama-3-RPMerge-8B-SLERP
- source_model: WesPro_Daring_Llama
- source_model: Chaos_RP_l3_8B
- source_model: llama-3-stinky-8B
It's meant for RP and does pretty well at it, but I haven't tested it extensively yet. |
seregadgl101/baii_v12_12ep | seregadgl101 | 2024-05-20T20:15:00Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-20T20:12:33Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# seregadgl101/baii_v12_12ep
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('seregadgl101/baii_v12_12ep')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=seregadgl101/baii_v12_12ep)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
thirdai/NamedEntityRecognition | thirdai | 2024-05-20T20:11:17Z | 0 | 0 | null | [
"token-classification",
"region:us"
] | token-classification | 2024-05-20T19:58:14Z | ---
pipeline_tag: token-classification
--- |
cobrakenji/granite-20b-code-base-GGUF | cobrakenji | 2024-05-20T20:10:19Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"gpt_bigcode",
"text-generation",
"code",
"granite",
"dataset:codeparrot/github-code-clean",
"dataset:bigcode/starcoderdata",
"dataset:open-web-math/open-web-math",
"dataset:math-ai/StackMathQA",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T16:24:07Z | ---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- codeparrot/github-code-clean
- bigcode/starcoderdata
# - Stackexchange
# - CommonCrawl
- open-web-math/open-web-math
- math-ai/StackMathQA
# - Arxiv
# - Wikipedia
# - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version | Phase 2
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-20b-code-base
results:
- task:
type: text-generation
dataset:
type: mbpp
name: MBPP
metrics:
- name: pass@1
type: pass@1
value: 43.8
      verified: false
- task:
type: text-generation
dataset:
type: evalplus/mbppplus
name: MBPP+
metrics:
- name: pass@1
type: pass@1
value: 51.6
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Python)
metrics:
- name: pass@1
type: pass@1
value: 48.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 50.0
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Java)
metrics:
- name: pass@1
type: pass@1
value: 59.1
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Go)
metrics:
- name: pass@1
type: pass@1
value: 32.3
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(C++)
metrics:
- name: pass@1
type: pass@1
value: 40.9
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Rust)
metrics:
- name: pass@1
type: pass@1
value: 35.4
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Python)
metrics:
- name: pass@1
type: pass@1
value: 17.1
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 18.3
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Java)
metrics:
- name: pass@1
type: pass@1
value: 23.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Go)
metrics:
- name: pass@1
type: pass@1
value: 10.4
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(C++)
metrics:
- name: pass@1
type: pass@1
value: 25.6
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Rust)
metrics:
- name: pass@1
type: pass@1
value: 18.3
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Python)
metrics:
- name: pass@1
type: pass@1
value: 23.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 23.8
      verified: false # Check
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Java)
metrics:
- name: pass@1
type: pass@1
value: 14.6
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Go)
metrics:
- name: pass@1
type: pass@1
value: 26.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(C++)
metrics:
- name: pass@1
type: pass@1
value: 15.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Rust)
metrics:
- name: pass@1
type: pass@1
value: 3.0
      verified: false
---
### Description:
This is forked from IBM's [`granite-20b-code-base-GGUF`](https://huggingface.co/ibm-granite/granite-20b-code-base-GGUF) - commit [`d70433a71e2fb9e20f8bfca3ff2d8c15393f0e44`](https://huggingface.co/ibm-granite/granite-20b-code-base-GGUF/commit/d70433a71e2fb9e20f8bfca3ff2d8c15393f0e44).
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-20b-code-base) for more details.
## Use with llama.cpp
```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# install
make
# run generation
./main -m granite-20b-code-base-GGUF/granite-20b-code-base.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
```
|
OwOpeepeepoopoo/DancingElaine5 | OwOpeepeepoopoo | 2024-05-20T20:09:42Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T12:09:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matthieuzone/MONT_D_ORbis | matthieuzone | 2024-05-20T20:08:29Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T20:00:18Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MONT_D_ORbis
<Gallery />
## Model description
These are matthieuzone/MONT_D_ORbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/MONT_D_ORbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_suba_s1_106s | nsugianto | 2024-05-20T20:07:54Z | 27 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"table-transformer",
"object-detection",
"generated_from_trainer",
"base_model:microsoft/table-transformer-structure-recognition",
"base_model:finetune:microsoft/table-transformer-structure-recognition",
"license:mit",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-05-20T14:45:38Z | ---
license: mit
base_model: microsoft/table-transformer-structure-recognition
tags:
- generated_from_trainer
model-index:
- name: tblstructrecog_finetuned_tbltransstrucrecog_v1_suba_s1_106s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_suba_s1_106s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
|
matthieuzone/MIMOLETTEbis | matthieuzone | 2024-05-20T20:00:03Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T19:51:54Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MIMOLETTEbis
<Gallery />
## Model description
These are matthieuzone/MIMOLETTEbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/MIMOLETTEbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
matthieuzone/MAROILLESbis | matthieuzone | 2024-05-20T19:51:40Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T19:43:29Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MAROILLESbis
<Gallery />
## Model description
These are matthieuzone/MAROILLESbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/MAROILLESbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
isaaclee/duration_mistral_train_run2 | isaaclee | 2024-05-20T19:43:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T17:39:41Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: duration_mistral_train_run2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# duration_mistral_train_run2
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
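This repository holds a PEFT (LoRA) adapter rather than full model weights, so the usual pattern is to attach it to the base model. The sketch below is not from the original authors and the prompt is a placeholder.

```python
# Minimal sketch: load the base Mistral model and attach this adapter with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "isaaclee/duration_mistral_train_run2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Mistral-Instruct uses [INST] ... [/INST] formatting; the question itself is illustrative.
inputs = tokenizer("[INST] How long will this procedure take? [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```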
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
mserrasa/yolos_finetuned_VinBigData | mserrasa | 2024-05-20T19:37:34Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"yolos",
"object-detection",
"generated_from_trainer",
"base_model:mserrasa/yolos_finetuned_VinBigData",
"base_model:finetune:mserrasa/yolos_finetuned_VinBigData",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-05-12T19:23:43Z | ---
base_model: mserrasa/yolos_finetuned_VinBigData
tags:
- generated_from_trainer
model-index:
- name: yolos_finetuned_VinBigData
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolos_finetuned_VinBigData
This model is a fine-tuned version of [mserrasa/yolos_finetuned_VinBigData](https://huggingface.co/mserrasa/yolos_finetuned_VinBigData) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
BilalMuftuoglu/beit-base-patch16-224-75-fold5 | BilalMuftuoglu | 2024-05-20T19:35:52Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T19:10:41Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-75-fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9534883720930233
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-75-fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2664
- Accuracy: 0.9535
## Model description
More information needed
## Intended uses & limitations
More information needed
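A minimal inference sketch with the image-classification pipeline; the image path is a placeholder and the class labels depend on the (unspecified) imagefolder dataset used for fine-tuning.

```python
from transformers import pipeline
from PIL import Image

# Image-classification pipeline backed by the fine-tuned BEiT checkpoint
classifier = pipeline("image-classification", model="BilalMuftuoglu/beit-base-patch16-224-75-fold5")

image = Image.open("example.jpg")  # placeholder input image
for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 3))
```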
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6862 | 0.5116 |
| No log | 2.0 | 4 | 0.5913 | 0.7209 |
| No log | 3.0 | 6 | 0.7204 | 0.6977 |
| No log | 4.0 | 8 | 0.5995 | 0.6977 |
| 0.6162 | 5.0 | 10 | 0.4235 | 0.8140 |
| 0.6162 | 6.0 | 12 | 0.3975 | 0.8140 |
| 0.6162 | 7.0 | 14 | 0.6029 | 0.7674 |
| 0.6162 | 8.0 | 16 | 0.4670 | 0.8140 |
| 0.6162 | 9.0 | 18 | 0.3448 | 0.8372 |
| 0.4312 | 10.0 | 20 | 0.4464 | 0.8372 |
| 0.4312 | 11.0 | 22 | 0.3396 | 0.8605 |
| 0.4312 | 12.0 | 24 | 0.4007 | 0.8372 |
| 0.4312 | 13.0 | 26 | 0.3398 | 0.8140 |
| 0.4312 | 14.0 | 28 | 0.4276 | 0.8605 |
| 0.3453 | 15.0 | 30 | 0.4336 | 0.8605 |
| 0.3453 | 16.0 | 32 | 0.3777 | 0.8140 |
| 0.3453 | 17.0 | 34 | 0.5910 | 0.8140 |
| 0.3453 | 18.0 | 36 | 0.6095 | 0.8140 |
| 0.3453 | 19.0 | 38 | 0.3570 | 0.8140 |
| 0.3288 | 20.0 | 40 | 0.5202 | 0.8140 |
| 0.3288 | 21.0 | 42 | 0.5604 | 0.8140 |
| 0.3288 | 22.0 | 44 | 0.2949 | 0.8372 |
| 0.3288 | 23.0 | 46 | 0.3442 | 0.8837 |
| 0.3288 | 24.0 | 48 | 0.2820 | 0.8372 |
| 0.2571 | 25.0 | 50 | 0.3240 | 0.8605 |
| 0.2571 | 26.0 | 52 | 0.2909 | 0.8837 |
| 0.2571 | 27.0 | 54 | 0.2429 | 0.8837 |
| 0.2571 | 28.0 | 56 | 0.2280 | 0.9302 |
| 0.2571 | 29.0 | 58 | 0.3984 | 0.8605 |
| 0.2012 | 30.0 | 60 | 0.2905 | 0.8605 |
| 0.2012 | 31.0 | 62 | 0.2509 | 0.9070 |
| 0.2012 | 32.0 | 64 | 0.2888 | 0.8605 |
| 0.2012 | 33.0 | 66 | 0.2689 | 0.8605 |
| 0.2012 | 34.0 | 68 | 0.2417 | 0.8837 |
| 0.1814 | 35.0 | 70 | 0.2418 | 0.9070 |
| 0.1814 | 36.0 | 72 | 0.2491 | 0.9070 |
| 0.1814 | 37.0 | 74 | 0.2998 | 0.9070 |
| 0.1814 | 38.0 | 76 | 0.2744 | 0.9302 |
| 0.1814 | 39.0 | 78 | 0.2664 | 0.9535 |
| 0.1555 | 40.0 | 80 | 0.2160 | 0.9302 |
| 0.1555 | 41.0 | 82 | 0.3875 | 0.9070 |
| 0.1555 | 42.0 | 84 | 0.4608 | 0.9070 |
| 0.1555 | 43.0 | 86 | 0.2978 | 0.9302 |
| 0.1555 | 44.0 | 88 | 0.4461 | 0.8837 |
| 0.1459 | 45.0 | 90 | 0.3603 | 0.9070 |
| 0.1459 | 46.0 | 92 | 0.2973 | 0.9302 |
| 0.1459 | 47.0 | 94 | 0.3385 | 0.8837 |
| 0.1459 | 48.0 | 96 | 0.3239 | 0.8837 |
| 0.1459 | 49.0 | 98 | 0.4315 | 0.8837 |
| 0.1372 | 50.0 | 100 | 0.3519 | 0.8837 |
| 0.1372 | 51.0 | 102 | 0.4148 | 0.8837 |
| 0.1372 | 52.0 | 104 | 0.4687 | 0.8837 |
| 0.1372 | 53.0 | 106 | 0.3287 | 0.8837 |
| 0.1372 | 54.0 | 108 | 0.3194 | 0.9070 |
| 0.1049 | 55.0 | 110 | 0.3703 | 0.8837 |
| 0.1049 | 56.0 | 112 | 0.3522 | 0.9070 |
| 0.1049 | 57.0 | 114 | 0.2572 | 0.9070 |
| 0.1049 | 58.0 | 116 | 0.2523 | 0.9070 |
| 0.1049 | 59.0 | 118 | 0.3136 | 0.9070 |
| 0.1143 | 60.0 | 120 | 0.3638 | 0.9070 |
| 0.1143 | 61.0 | 122 | 0.2916 | 0.9535 |
| 0.1143 | 62.0 | 124 | 0.2521 | 0.9302 |
| 0.1143 | 63.0 | 126 | 0.2735 | 0.9302 |
| 0.1143 | 64.0 | 128 | 0.3112 | 0.9302 |
| 0.0885 | 65.0 | 130 | 0.3246 | 0.9302 |
| 0.0885 | 66.0 | 132 | 0.3264 | 0.9070 |
| 0.0885 | 67.0 | 134 | 0.3351 | 0.9302 |
| 0.0885 | 68.0 | 136 | 0.3455 | 0.9302 |
| 0.0885 | 69.0 | 138 | 0.3579 | 0.9302 |
| 0.1064 | 70.0 | 140 | 0.3926 | 0.9302 |
| 0.1064 | 71.0 | 142 | 0.4370 | 0.9070 |
| 0.1064 | 72.0 | 144 | 0.4149 | 0.9302 |
| 0.1064 | 73.0 | 146 | 0.3315 | 0.9535 |
| 0.1064 | 74.0 | 148 | 0.2704 | 0.9302 |
| 0.1047 | 75.0 | 150 | 0.2600 | 0.9302 |
| 0.1047 | 76.0 | 152 | 0.3215 | 0.9535 |
| 0.1047 | 77.0 | 154 | 0.4110 | 0.9302 |
| 0.1047 | 78.0 | 156 | 0.4414 | 0.8837 |
| 0.1047 | 79.0 | 158 | 0.3589 | 0.9302 |
| 0.0937 | 80.0 | 160 | 0.3085 | 0.9535 |
| 0.0937 | 81.0 | 162 | 0.2889 | 0.9535 |
| 0.0937 | 82.0 | 164 | 0.2787 | 0.9535 |
| 0.0937 | 83.0 | 166 | 0.3251 | 0.9535 |
| 0.0937 | 84.0 | 168 | 0.4483 | 0.9070 |
| 0.0748 | 85.0 | 170 | 0.5490 | 0.8605 |
| 0.0748 | 86.0 | 172 | 0.5422 | 0.8605 |
| 0.0748 | 87.0 | 174 | 0.5282 | 0.8837 |
| 0.0748 | 88.0 | 176 | 0.5733 | 0.8605 |
| 0.0748 | 89.0 | 178 | 0.5978 | 0.8605 |
| 0.0834 | 90.0 | 180 | 0.5763 | 0.8605 |
| 0.0834 | 91.0 | 182 | 0.5270 | 0.8605 |
| 0.0834 | 92.0 | 184 | 0.4946 | 0.8837 |
| 0.0834 | 93.0 | 186 | 0.4881 | 0.9070 |
| 0.0834 | 94.0 | 188 | 0.5115 | 0.8605 |
| 0.1016 | 95.0 | 190 | 0.5445 | 0.8605 |
| 0.1016 | 96.0 | 192 | 0.5537 | 0.8605 |
| 0.1016 | 97.0 | 194 | 0.5451 | 0.8605 |
| 0.1016 | 98.0 | 196 | 0.5323 | 0.8605 |
| 0.1016 | 99.0 | 198 | 0.5190 | 0.8837 |
| 0.0657 | 100.0 | 200 | 0.5155 | 0.8837 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
matthieuzone/FROMAGE_FRAISbis | matthieuzone | 2024-05-20T19:34:45Z | 3 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T19:26:37Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/FROMAGE_FRAISbis
<Gallery />
## Model description
These are matthieuzone/FROMAGE_FRAISbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/FROMAGE_FRAISbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
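While the official snippet is still marked TODO above, the following is a minimal sketch of how such SDXL LoRA weights are typically loaded with `diffusers`; the prompt and output filename are illustrative and not part of the original card.

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Use the same fp16-safe VAE that was used during training (see "Model description")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA adapter weights from this repository
pipe.load_lora_weights("matthieuzone/FROMAGE_FRAISbis")

# The trigger phrase "a photo of sks cheese" comes from the card above
image = pipe("a photo of sks cheese on a wooden board", num_inference_steps=25).images[0]
image.save("sks_cheese.png")  # illustrative output path
```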
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
rdsmaia/my_awesome_mind_model | rdsmaia | 2024-05-20T19:29:47Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-02-17T21:09:04Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.07964601769911504
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6607
- Accuracy: 0.0796
## Model description
More information needed
## Intended uses & limitations
More information needed
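A minimal inference sketch with the audio-classification pipeline; note the low evaluation accuracy reported above, so treat predictions as a tutorial artifact. The audio path is a placeholder and should point to a 16 kHz recording.

```python
from transformers import pipeline

# Audio-classification pipeline backed by the fine-tuned wav2vec2 checkpoint
classifier = pipeline("audio-classification", model="rdsmaia/my_awesome_mind_model")

# Placeholder path; MInDS-14 en-US consists of short banking-intent voice recordings
predictions = classifier("example_call.wav")
for prediction in predictions:
    print(prediction["label"], round(prediction["score"], 3))
```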
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6546 | 0.0708 |
| No log | 1.8667 | 7 | 2.6484 | 0.0796 |
| 2.5954 | 2.9333 | 11 | 2.6503 | 0.0619 |
| 2.5954 | 4.0 | 15 | 2.6522 | 0.0619 |
| 2.5954 | 4.8 | 18 | 2.6549 | 0.0796 |
| 2.5798 | 5.8667 | 22 | 2.6577 | 0.0796 |
| 2.5798 | 6.9333 | 26 | 2.6597 | 0.0796 |
| 2.57 | 8.0 | 30 | 2.6607 | 0.0796 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bhaskars113/toyota-paint-attribute-1.2 | bhaskars113 | 2024-05-20T19:28:56Z | 7 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"region:us"
] | text-classification | 2024-05-20T19:28:24Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/paraphrase-mpnet-base-v2
metrics:
- accuracy
widget:
- text: I think it sounds pretty good, especially for a pic disc! Sounds on par with
rose pink Cadillac and probably better than smooth big cat. My one issue is I
have a few skips in the first song....but I'm using my backup scratch needle right
now so I'm not sure if it's actually the record The sea glass looks super cool
too, cheers!
- text: Nice. Were the chrome strips on the power assist steps wrapped or painted?
Thinking of dechroming mine and thinking the vinyl will get scuffed off pretty
quickly.
- text: Oh and consider yourself blessed you got meteorite, I have sonic and swirl
marks and scratches are so easily seen, with grey it hides much better
- text: https://preview.redd.it/by2gzb77m2wa1.jpeg?width=1284&format=pjpg&auto=webp&s=6d38c244f6a82b6af4b4eebe91c59f60536f289e
Under the light the paint looks terrible but outside of that, the car is sooo
clean. Wish I could add more than one pic. The interior and everything mechanical
is just amazingly clean.
- text: Not true. Once oxidation has begun there’s no stopping it you can minimize
the oxidation of the affected area by coating it but you can’t stop it
pipeline_tag: text-classification
inference: true
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 36 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 19 | <ul><li>'I usually come on to give a hard time to ‘frame rust’ posts. But damn. This thing must have dedicated parking spot, in the ocean. I never expected it to be that bad. But in saying that the truck will be fine for 20-30 years.'</li><li>'Laser cut or etched with some sort of acid? Probably not just the bumper. Take a look at the frame and suspension. Make sure its not rusting. And if it is maybe you should get a frame coating of sorts'</li><li>'This level of rust is common from my experience, these frames are coated with a black wax like material instead of paint. Eventually this wax material wears off and then corrosion starts. It might look ugly but as long as there are no holes or cracks the frame is fine. If you want to make it look better visually you can spray oil on the frame (like chainsaw chain lube, fluid film etc)'</li></ul> |
| 13 | <ul><li>'When I want to get it cleaned quickly I do the touch less car wash there isn’t many so you’ll have to find one that does it. Also gotta be careful what kind of cleaning chemicals they use cause some products can damage the car even touch less car wash and the pressure it uses. Now if I want to take my time and makes sure it’s done right then I have my own cleaning set up for car washing it. I use Adam polishes stuff from detail to shampoo etc. I also got the Adam polishes pressure washer and the Adam polished towels to dry it up. I recommend watching a lot of videos on YouTube there’s plenty of information of what to do and not to do if you decide to clean it yourself cause you don’t wanna mess it up and cost you more to do a paint correction. Hopefully this helps? ??'</li><li>'If you find one that you like, I’ve found that installing a little blue painters tape under the rack mounting points keeps the hardware from marking up the paint. Make sure it is the blue kind, it comes off much easier when you go to remove it.'</li><li>'Washing the car by hand has always felt like the only option to me. A good electric power washer with the right chemicals makes it easy and wax keeps the dirt off for a good amount of time.'</li></ul> |
| 18 | <ul><li>'Love the color'</li><li>'Ufff she’s gleaming like a ?? stunning ??'</li><li>'The spray paint is cool (and encouraged), but I do wish I could have seen them freshly planted and shiny as well. ... I still say we do this when we retire the current racecars ;) #SubaruRanch \n \n #CadillacRanch #Amarillo\n #TX #automotive #rt66 #America #americanhistory\n #travel #adventure #subaru'</li></ul> |
| 20 | <ul><li>'Yeah I don’t want to be putting plastidip or something on a brand new truck. This would be a lot but what about getting them painted gloss black to match the grille on the inside and and have the chrome be matte black similar to some of the trim'</li><li>'**Foose** **F104 LEGEND** **Gloss Black Milled, i think would look great, and are only about 270.00 for Foose good price**'</li><li>'Nice, love the matte paint/wrap!'</li></ul> |
| 15 | <ul><li>'True are you taking New customer I have a 1937 buick no dent little surface rust'</li><li>"Steelbrush, steel wool, clean with vinegar solution and finish with flex seal. Stops water and oxygen. No oxygen means no oxidation which means no rust. Back in the day they would save rust and add it to their paint or whitewash which gave the barn it's distinctive color which is often imitated but rarely duplicated."</li></ul> |
| 22 | <ul><li>'I’ve got a 2021 Gretsch g6128t-57 duo jet in Cadillac green. The guitar is in excellent condition with some minor scratches and swirls too hard to photograph. Got it from a fellow forum brother about 5 months ago for my first foray into Gretsch guitars.'</li><li>'Hopefully you don’t find too many surprises when you take the paint off. Yeah, it’s hard to tell condition from pictures. Good luck with your project.'</li><li>"$8,999 but just test drove it and I don't think im going to get it because it looked a lot better in the pictures. Has been kept dirty on the lot and there are a lot of swirl marks and it's just not in the best condition as I hoped. Some things are wrong with the interior too. The screen In the dash has some weird spots and there's obviously some electrical issues because the interior lights don't turn off lol"</li></ul> |
| 2 | <ul><li>'I had a similar situation with a black GMC truck and an APC. The roof and top edge of the hood had visible oxidation setting in. '</li><li>'Finally got it detailed. Car looked great from a distance, but up close had lot of oxidation from sitting up for a while. '</li><li>'Have this on my charger hood, small oxidation spot, I was told a new hood is the best option (OEM) does getting it repainted actually work? It was a very reputable non dealer body shop'</li></ul> |
| 21 | <ul><li>"Click for more info and reviews of this Malone Watersport Carriers:\n \n https://www.etrailer.com/Watersport-Carriers/Malone/MPG107MD.html\n \n Check out some similar Watersport Carriers options:\n \n https://www.etrailer.com/dept-pg-Watersport_Carriers-sf-Kayak.aspx\n \n \n \n Search for other popular Chevrolet Equinox parts and accessories:\n \n https://www.etrailer.com/vehicle/2020/Chevrolet/Equinox\n \n \n \n https://www.etrailer.com\n \n Don’t forget to subscribe! https://www.youtube.com/user/etrailertv\n \n \n \n Full transcript: https://www.etrailer.com/tv-install-malone-seawing-kayak-carrier-2020-chevrolet-equinox-mpg107md.aspx\n \n Hey everyone, Charles here at etrailer. And today we're taking a look at the Malone SeaWing Kayak Carrier on the 2020 Chevrolet Equinox. These are gonna be your saddle style kayak carrier. So they're gonna be great for your extra long or your extra wide kayaks that don't fit in a J-style carrier. On our 54 inch crossbars, we still have plenty of room for another set of these for another kayak or even a bike or a cargo basket. These are gonna be made out of a polycarbonate material. They're gonna be very durable and corrosion resistant. They come with the nylon straps as well as the bow and stern tie-downs. And these are gonna fit most of your factory cross bars. We have these on the arrow style and they fit nicely on those, but as well as your square and your round bars as well. And on the inside of the saddle, there is a nice, it's like a thick rubber with grooves for added traction and protection for your kayak. Weight capacity is gonna handle your kayaks of up to 75 pounds. And one thing that I really like about this is that you don't need any tools to install it, So it's very quick to install and uninstall. Just make sure that you have plenty of clearance right here for your crossbar since you do have to twist the knobs underneath your crossbars. Your saddle style kayak carriers are gonna give you extra clearance compared to your J-style kayak carriers. So that's gonna be beneficial for you if you have a larger vehicle like Yukon so that you can park into your garage or go through a drive-through or anything like that. So overall, these are a very solid and durable build, that's gonna last you a long time and it's gonna be perfect for you if you have your extra long or extra wide kayaks. To begin the installation, we have everything laid out that we're gonna use. Malone does provide us with two sets of bolts. A small and a large. We are using the large for our arrow style crossbars that we have. We installed the front carry already but we'll do the rear together. So I'm just gonna stick the bolts through. And then this padded bar, the groove is going to face this way and we're just gonna loosely, loosely screw this guy on. If I get to the same spot, I like to squeeze them in at the same time so that we get an even distribution and so that the carrier isn't lopsided when we tighten them down. All right, so we have the red strap here. We are going to go up through the top. I mean, I guess down to the top and then up through the closer slot here. And we are just gonna set these off to the side until we load a kayak on, then we can just throw it over. This is your rear loading kayak carrier. So if you didn't have the Malone C suction cup attachment that you purchased separately, you can always just lay a thicker pad over here, that way you're not scratching your vehicle. But these are actually close enough to the edge of our arrow bars. 
And I'm tall enough to just slide it on through the side here. It's not the best way but it gets it done. So now I'm just gonna wrap our straps across the top. Make sure that these are flat. Making sure that this leather part on the buckle is facing our kayak to avoid any metal on our kayak here. And then we're just gonna do the same thing on this side. Going down and then up. Pull that through. Through the leather strap and then up through our buckle here. And then we can just roll up the straps, clean them up to get 'em out of the way. So Malone provides us with the bow and certain tie down straps. They are gonna be a stainless steel S hook. And if you didn't have a hood anchor or anything like that, you can always pick one up here at etrailer. Today, we are using a padded dog bone from our etrailer kayak carrier tie down strap kit. It just makes it a lot easier and we don't have to have any metal on frame contact. So we have it where we want it. So now we can just close the hood and attach our hook right here. So once we have our S hook hooked into our strap right here, we're just gonna pull tight. And then maybe around 15 inches or so, we're gonna make a loop, and then another loop. We're gonna wrap that around and then go back through in the middle. And then we are gonna take our free end over here and slide that through the back. Pulling on this side tight and then pulling down on my left. I'm just gonna go ahead and tie it off."</li><li>'Thank you. The spray paint is holding up well.'</li><li>'My C8 is black. I can say after 8 months, PPF holding up really good: knock on wood!'</li></ul> |
| 8 | <ul><li>'your paint looks great, is it original? Looks super smooth'</li><li>'Personally I prefer black mirror and everything else body color. That paint looks smoooth tho.'</li><li>"I used Adam's advanced graphene ceramic coat. It's billed as 10H and 9+ year durability. The kit was $140 on Amazon and i barely used a quarter of it. The paint feels smooth like glass. It's crazy."</li></ul> |
| 0 | <ul><li>'VW Tiguan has been massacred by deep scratches, I have been experimenting with different pad combinations trying to remove the deep scratches.'</li><li>'Black cars in general don’t hold their value as well as other colors due to black paint showing scratches and swirls extremely easily. These are not investment vehicles, at the end of the day they are economy Chrysler products made with very little concern for quality control.'</li><li>"I'm not sure why they went Gloss black on the GT front but matte on the EB. Glossy sucks to clean and scratches easily."</li></ul> |
| 27 | <ul><li>"First, it looks like your factory screen protector is still on the infotainment and I need to peel it off... Second, piano black interiors attract/show so much dust that you'd swear dust was manufactured by the trim. I keep a very small, tiny version of a California Duster in my car to tidy bits up. One end is like the duster, the other end is like a fat paint brush to get into crevices."</li><li>'The reason we charge high dollar ppf prices is because we budget throw away materials for contamination like that. We run dust extraction machines like crazy in our install room. We also disassemble everything we can within reason we did a Hummer H2 custom bulk install on every panel and charged $15,000. Even at that price, we don’t look forward to doing another one. Customer was thrilled though.'</li><li>'Natural wax is actually oil like and attracts dust and dirt particles (best shine tho) Synthetic sealants / hybrid wax or ceramic / graphene will repel elements.'</li></ul> |
| 9 | <ul><li>'haha yea black is my color, esp with a glossy coat, love the look of shiny black & the vibrant red tail lights! the red is pretty cool too, i like the shade they have. I feel not all cars can pull off such a vibrant red!'</li><li>'Hi All! The Bronco is ready for paint, and I’m torn between these three options. You guys have seen everything and I value your opinion from the heart. Was gonna do candy red but decided it’s too loud for this truck. Matte Graphite - looks insane irl and I think will pop with chrome and black accents. Every manufacturer is coming out with “special” satin/matte paints, and it’s super popular. I’ve always believed the bronco should be a nice glossy paint, and I’m truly happy with either option one or two. But then option three came in to derail my thought process. What do you find folks think would be a logical choice both in terms of aesthetic and maintenance?'</li><li>'Nice look. The car is mirror like with that glossy finish. Wonderful looking Challenger'</li></ul> |
| 3 | <ul><li>'I have a 22 Taos S 4motion. 40k miles. Pluses are that there car is reasonably comfortable and drives well. Decent mileage (28.6). With factory tires it’s been great in the snow. Plenty of room. Downsides - Paint is cracking (see previous post) and they will not cover (warranty on paint ends at 36k).'</li><li>'Paint formulas changed drastically in 2007. They went from an oil base paint to water based, due to EPA regulations. Smaller rock chips, paint cracking, easier to scratch, etc. have gone up ever since.'</li><li>'Thanks! It cost about $300 total. Same reason we repainted it… had too much cracked paint on the hood.'</li></ul> |
| 12 | <ul><li>"The driver's & front sides of the 235 stovebolt 6 engine has been painted blue as in a '56 Chevrolet blue flame engine out of a car (see pics before & after). It happens to be the identical type engine, i.e., from a Chevrolet car & born in 1956 that is in my other antique pick-up. I want the engine looking nice before it's dropped into the '52 Chevrolet Suburban Carryall."</li><li>'Southeast Alabama . That is one awesome looking 90 model truck. I have a 1994 Chevrolet Silverado extended cab that I am and the shop and paint shop are trying kind of go back to factory or as close as we can . It is by no means a 1000 Horse power . Just the plain old 350- 5.7 Throttle body.'</li><li>'I recently bought a 1987 Cadillac Brougham. Mechanically, it is in impeccable shape with only 41,000 original miles on the clock. The only issues are the bumper fillers (that I have replacements for), a few minor dings, and the paint.'</li></ul> |
| 26 | <ul><li>'I’ve got a 2017 diesel Colorado and am happy with it stock emissions 147,000. One thing I learned: DOC is a wear part in addition to DPF. (There is no code for a bad oxidation catalyst just P200C high exhaust temps. I haven’t made master post about it on Coloradofans yet.) Anyway I’m happy but lots of information on chat rooms is confused and not always corrected by others who know. I like the Z71 better but do you ! !'</li><li>"Most recently used that trick on my cousin's 07 infiniti g37 and it blew his mind lol. As always follow up with your LSP of choice. I tried it on my old cadillac cts and it didn't even make a dent in the oxidation, had to break out the sand paper for that one. Thanks, looks like different plastic materials react differently. I wish we still have the good, old glass headlights."</li><li>'Oxidation of the metal. It’s not shiny. Nor the cracks'</li></ul> |
| 7 | <ul><li>'I’m not a big fan of the old F-150, but that paint is sharp, and I love that blue color.'</li><li>'Wow, cool . I am the 3rd owner of a 2001 Ford Ranger Edge pickup, bright island blue metallic paint 131k miles and original paint. The truck still has 80% of its factory installed parts on it today. Not bad for a Maine vehicle.'</li><li>'White, however, is the best and most long lasting color. I bought a 2005 Cadillac Deville (white), and the paint looks new. I have had other examples, the white is the most long lasting color as it does not absorb heat.'</li></ul> |
| 14 | <ul><li>'I was eager to see what they were going to be but as a detail (hobby) guy, even for a garage go queen that paint is a no go. Really think this is a collector money grab. $15k is just too much for paint that is 8k on a Cadillac If you want it, GO FOR IT, but if you don\'t have the whole car wrapped in PPF, road debris is really going to do a number on it if that 15k doesn\'t include some extra mils and durability additive of paint. Not sure it will have a "clear" to polish out imperfections?'</li><li>'That makes no sense and looks horrible. It may be painted but may also be removable. If someone just hand laid it on top of the clear coat, it may be able to be removed. The easy answer is just put a black vinyl stripe over it and forget it ever was there'</li><li>'Originally Posted by SnakeEyeSS (Post 11325666) I was eager to see what they were going to be but as a detail (hobby) guy, even for a garage go queen that paint is a no go. Really think this is a collector money grab. $15k is just too much for paint that is 8k on a Cadillac If you want it, GO FOR IT, but if you don\'t have the whole car wrapped in PPF, road debris is really going to do a number on it if that 15k doesn\'t include some extra mils and durability additive of paint. Not sure it will have a "clear" to polish out imperfections?'</li></ul> |
| 5 | <ul><li>'Both will be tough to keep clean. The gray will be more forgiving when it gets some swirls though. Only way I’d have a black car is if I had a garage to keep it in and it wasn’t my daily lol.'</li><li>'Even when it’s dirty I find the phantom a bit more “forgiving” vs jet black/plain black. Hides it’s age a bit more too since the color is busier vs a straightforward, unforgiving black paint.'</li><li>"I've owned my share of black vehicles and I am too OCD to own them without spending an inordinate amount of time taking care of them. I'm a white, silver and maybe gunmetal grey guy now just because of the maintenance."</li></ul> |
| 16 | <ul><li>'Color match. Now do the mirror caps and the door handles. If you decide to do the “bump strip on the doors, replace them. Don’t paint them. The paint doesn’t stick as well as you’d like in the long run on the plastic chrome.'</li><li>"2014 chevy equinox. There is a very slight shake at highway speed (75mph+) but when I hit the brakes my car turns into the paint mixer at home depot. I haven't noticed it with city driving, only highway."</li><li>"Personally, I've never worked on an Escalade but I've been around Cadillacs for a while. I was taught never to buy an old used Cadillac because of their engineering. If you want to take apart one thing, be prepared to take out everything. Parts are expensive, aluminum cracks and warps. In general I found Cadillacs to be engineering boobie traps with lots of spots to rip your arm open and scratch your hands. I guess that's just my opinion. You seem to like the challenges and I respect you for it."</li></ul> |
| 24 | <ul><li>"MY HD is stock, so no loud pipes. It shakes like a Home Depot paint mixer at idle, but silky smooth on the move. You'll love the Goldwing, just be careful in thinking that you're going from a Ford to a Cadillac in the comfort department."</li><li>'Thoughts on leather conditioners - Apple Leather: puff up quilting about 2-3 applications but take care because it builds up shine, and to lube up stiff leather chain straps ?? - Saphir: gives life to dry grained leather and buffs out scratches - Cadillac: soft smooth leathers like lambskin and an all around mvp safe bet for all types of leather - Leather Honey: gives life to shoes, but ends tragically if used on grained leathers What are your thoughts?? Feel free to disagree / disprove the above!'</li><li>'Same. Smooth that corner, apply touch up paint, call it a day. No one will see it'</li></ul> |
| 1 | <ul><li>'Has the underground color and was commenting the other day about bad paint from the factory. I thought he was crazy until I went to look at one. Sure enough I went to the dealership and the black one I saw looked like it had already been through the car wash several times.'</li><li>'There are known paint defects with Hyundai-Kia white paint. Assuming this is factory paint, you should contact Kia and push them to fix it'</li><li>'My coworker has had his Fit painted 3 times due to shitty factory paint. It would all flake off near the window.'</li></ul> |
| 11 | <ul><li>'Progressive, I had to fight for every dollar. They wanted to take $150 off the cash value they were paying for a tiny scuff mark in the interior plastic in the trunk area of the car, since it was a “pre-accident damageâ€_x009d_ which was total bullshit.'</li><li>'The Bronco Raptor is just as exotic or rare as your base model corvette. Not even a c8r. Calm down you didn’t scuff your shoes.'</li><li>'They managed to get to her, and she suffered no serious injuries, save that her leg was scuffed pretty badly (blue and flathead catfish have no actual teeth, just a rough inner lip like sandpaper), but the experience was very traumatizing and made several newspapers and local TV news broadcasts (allegedly... I never saw this myself, despite my attempts in the past to find evidence for it). I\'ve also spoken personally to several people who have claimed to do underwater work for the lakes in scuba gear (not sure what it is, save that it\'s got something to do with dam maintenance), and they have told me personally that they have seen catfish nesting at the foot of some dams that are "...the size of Buicks." Make of that how you will. There is also, of course, the ever-present rumor of the freshwater octopus in Oklahoma, but...can\'t say I have any experience with that one.'</li></ul> |
| 6 | <ul><li>"Tesla's paint quality isn't the best but if you've ever owned a Honda then you know the pain. Somehow Hyundai is one of the few car companies to figure out how to make really durable paint."</li><li>'Hyundai puts a second coat of white paint on the car to make it more durable, hence the extra cost.'</li><li>"German cars and luxury cars in general will have significantly more durable paint. Honda on their speciality cars (e.g. CTR, NSX) will use harder paint. Aluminum bodied F series trucks and Audi’s usually have pretty solid paint. Mazda's speciality colors and Acura's $6k paint jobs are top notch"</li></ul> |
| 4 | <ul><li>' I practiced on a 1990 Honda Accord that had neglected rough paint and had been sitting for 10 years. '</li><li>'Probably done in shipping. It’s more hassle to get fixed than touch up paint. Had a scrape on my vehicle skirt. The paint issues that are worrisome are factory and usually take some time to appear as ripples, lines, or premature fading.'</li><li>'Hey to all that have a hummer ev. I just took delivery and parked in garage. I noticed in the garage when light hits the right angle the reflection ripples like something is there.'</li></ul> |
| 28 | <ul><li>'My 2018 was peeling when I got it brand new. Instead of having Chevy get me a new one, that would fail again I took some acetone to it to remove the remaining red. Then I bought sharpie paint pens and colored them yellow to match my calipers. Five years later the yellow is still perfect.'</li></ul> |
| 32 | <ul><li>'Hey a little unrelated, but in my C8 (3LT) my leather dash is bubbling and delaminating. My dealer is taking care of it but why is this still an issue with the 3LTs even after being an issue for years on the C7? Is the glue they use different for the different leathers or something?'</li></ul> |
| 25 | <ul><li>'I found a hummer EV with orange paint on the whole body like that!'</li><li>'Oooo paint some engine block orange!'</li><li>'I see some paint, was it orange or red paint originally?'</li></ul> |
| 33 | <ul><li>"We had to rescue a little male black-chinned hummer today. He had somehow managed to skewer a bug and got it stuck, holding his beak closed. We watched him for 2 days, trying to scratch and rub it, but just couldn't get it off."</li><li>"The Chevy Bolt in the chicken coop is to keep animals out, I bet. I have friends who own a Chevy Bolt, and *twice* they've had squirrels eat through the electrical wiring in their car. Apparently, the wires are wrapped in a soy-based coating that hungry animals like to nibble on."</li></ul> |
| 35 | <ul><li>'Squatted, white, late model GMC 2500. Gasser, RWD, with the thinnest of tires. Painted on almost.'</li><li>'Yes my rear seats ultimate leather is thinner than paper and is pealing away after 4 weeks owning a 2024 wtf. I want a lifetime warranty as long as I own the truck cheap cheap painted fake leather'</li><li>'"Alexa, put paint thinner on my shopping list."'</li></ul> |
| 30 | <ul><li>'A good steam clean under carriage and some under coating it’s should be good as new'</li><li>'Surface rust. A wire brush, some rust converter then chassis paint will make the frame look like new.'</li><li>'I feel like Claptons "Cocaine" would be more appropriate with that pristine white paint and the t-tops'</li></ul> |
| 17 | <ul><li>'I absolutely love the C8 in white. The paint matched side intake trim looks amazing, too. Excellent choice. 10/10.'</li><li>'Acs in Canada has perfect carbon flash matching spoilers. I think they paint colors as well. Very high quality'</li></ul> |
| 23 | <ul><li>'Theres no guarantee about the trans, got my 17 w/ 90k miles and still had to fix the shudder myself. Things to look for... Oil lines in engine bay (known to blow around 100k miles) Rust underneath All lighting, inside AND out. The center cluster is known to have dull lights/ bulbs (icons) Use the chair, rock back and forth and everything to see if anything falls short Every detail matters, inspect paint, edges, even under the bed (lot of dust and dirt collects under the tailgate)'</li><li>'Idk how this came up on my feed. Nice paint combo. That town looks boring.'</li><li>'What do you recomend for putting a coating on rubber mats. I hate how dull they look when youre done with a good detail.'</li></ul> |
| 10 | <ul><li>"I'd tape off the bottom of the bumper where it starts to taper down, conveniently where your scratches start....and I'd paint the entire bottom underside bumper black. So all the scratches will be covered in black, then add a splitter. No one would ever know except you. I'd use a decent quality spray paint and make sure to either remove the bumper from the car or get it high enough to cleanly spray the paint proper. 3 or 4 coats of black, couple coats of clear. Maaaybe"</li><li>'What’s going on in pic 3? That looks bizarre!! How could that be missed before delivery. Might have a tougher time with the small interior scratch.'</li><li>'I had a sweet ‘71 Chevy Blazer and some knuckleheads aggressively used a screwdriver to pry out the cheapest ‘97 Jensen CD player scratching up the original metal dash. Insult to injury - disc 4 from Bob Dylan’s box set ‘biograph’ was in there'</li></ul> |
| 29 | <ul><li>'I am original owner of a 2001 Sierra 1500 SLE w/5.3L Z71 Have 152,000mi on it. It’s also part of the infamous cracked head list. The body and frame are in fantastic shape as is the original paint. No rust anywhere. I attribute this to maybe washing it 30x in its lifetime. :) Can I replace the engine with a newer LS or is it best to rebuild these? I’m not looking for massive power, just reliability.'</li></ul> |
| 34 | <ul><li>'Wonder if it traps moisture under it or has any vibration that could cause paint wear. I love the looks of this tho!'</li><li>'I bet it’s from the brushes. The finish on the paint does not like the rough brushes from car washes. When I got my bolt they told me absolutely do not take it to a car wash. Hand wash only or this could happen'</li></ul> |
| 31 | <ul><li>'Customer service Ordered a new leer 100xr topper for my 2021 gmc. Waited over 2months for it to be built and when dealer installed it i noticed that the window gasket is half way out on one side and then there is a couple rough grinder marks on the lower edges. Not to mention when the dealer installed it they mounted it out of alignment and by the time i drove from dealer to my house it already rubbed the paint on the corner of my bedside above taillight completely off down to the bare metal.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("bhaskars113/toyota-paint-attribute-1.2")
# Run inference
preds = model("Oh and consider yourself blessed you got meteorite, I have sonic and swirl marks and scratches are so easily seen, with grey it hides much better")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 46.2451 | 924 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 16 |
| 1 | 16 |
| 2 | 16 |
| 3 | 16 |
| 4 | 16 |
| 5 | 16 |
| 6 | 16 |
| 7 | 16 |
| 8 | 16 |
| 9 | 16 |
| 10 | 4 |
| 11 | 11 |
| 12 | 20 |
| 13 | 13 |
| 14 | 16 |
| 15 | 2 |
| 16 | 20 |
| 17 | 2 |
| 18 | 8 |
| 19 | 5 |
| 20 | 14 |
| 21 | 15 |
| 22 | 3 |
| 23 | 5 |
| 24 | 18 |
| 25 | 3 |
| 26 | 13 |
| 27 | 7 |
| 28 | 1 |
| 29 | 1 |
| 30 | 4 |
| 31 | 1 |
| 32 | 1 |
| 33 | 2 |
| 34 | 2 |
| 35 | 4 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0011 | 1 | 0.1689 | - |
| 0.0563 | 50 | 0.2155 | - |
| 0.1126 | 100 | 0.139 | - |
| 0.1689 | 150 | 0.0656 | - |
| 0.2252 | 200 | 0.0359 | - |
| 0.2815 | 250 | 0.0462 | - |
| 0.3378 | 300 | 0.0182 | - |
| 0.3941 | 350 | 0.0235 | - |
| 0.4505 | 400 | 0.0401 | - |
| 0.5068 | 450 | 0.042 | - |
| 0.5631 | 500 | 0.0461 | - |
| 0.6194 | 550 | 0.0034 | - |
| 0.6757 | 600 | 0.0181 | - |
| 0.7320 | 650 | 0.0094 | - |
| 0.7883 | 700 | 0.0584 | - |
| 0.8446 | 750 | 0.0175 | - |
| 0.9009 | 800 | 0.0036 | - |
| 0.9572 | 850 | 0.0274 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
dsfsi/simcse-dna | dsfsi | 2024-05-20T19:28:53Z | 36 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"DNA",
"biology",
"genomics",
"protein",
"kmer",
"cancer",
"gleason-grade-group",
"arxiv:2104.08821",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T08:57:57Z | ---
license: cc-by-sa-4.0
tags:
- DNA
- biology
- genomics
- protein
- kmer
- cancer
- gleason-grade-group
---
## Project Description
This repository contains the trained model for our paper **Fine-tuning a Sentence Transformer for DNA & Protein tasks**, currently under review at BMC Bioinformatics. The model, called **simcse-dna**, is based on the original implementation of **SimCSE [1]**. It was adapted for DNA downstream tasks by training on a small sample of k-mer tokens generated from the human reference genome, and can be used to generate sentence embeddings for DNA tasks.
### Prerequisites
-----------
Please see the original [SimCSE](https://github.com/princeton-nlp/SimCSE) repository for installation details. The model will also be hosted on Zenodo (DOI: 10.5281/zenodo.11046580).
### Usage
Run the following code to get the sentence embeddings:
```python
import torch
from transformers import AutoModel, AutoTokenizer
# Import trained model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("dsfsi/simcse-dna")
model = AutoModel.from_pretrained("dsfsi/simcse-dna")
# `sentences` is your list of n DNA k-mer tokens of size 6
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
# Get the embeddings
with torch.no_grad():
embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output
```
The retrieved embeddings can then be used as input features for a downstream machine learning classifier.
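As a minimal sketch (assuming a hypothetical `labels` array with one class label per input sequence), the pooled embeddings from the snippet above could be fed to a scikit-learn Random Forest, the strongest classifier in the tables below:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = embeddings.cpu().numpy()  # (n_sentences, hidden_size) pooled embeddings from the snippet above
y = labels                    # hypothetical labels, e.g. 0/1 case-control status per sequence

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```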
## Performance on evaluation tasks
Find out more about the datasets and access in the paper **(TBA)**
### Task 1: Detection of colorectal cancer cases (after oversampling)
| | 5-fold Cross Validation accuracy | Test accuracy |
| --- | --- | ---|
| LightGBM | 91 | 63 |
| Random Forest | **94** | **71** |
| XGBoost | 93 | 66 |
| CNN | 42 | 52 |
| | 5-fold Cross Validation F1 | Test F1 |
| --- | --- | ---|
| LightGBM | 91 | 66 |
| Random Forest | **94** | **72** |
| XGBoost | 93 | 66 |
| CNN | 41 | 60 |
### Task 2: Prediction of the Gleason grade group (after oversampling)
| | 5-fold Cross Validation accuracy | Test accuracy |
| --- | --- | ---|
| LightGBM | 97 | 68 |
| Random Forest | **98** | **78** |
| XGBoost |97 | 70 |
| CNN | 35 | 50 |
| | 5-fold Cross Validation F1 | Test F1 |
| --- | --- | ---|
| LightGBM | 97 | 70 |
| Random Forest | **98** | **80** |
| XGBoost |97 | 70 |
| CNN | 33 | 59 |
### Task 3: Detection of human TATA sequences (after oversampling)
| | 5-fold Cross Validation accuracy | Test accuracy |
| --- | --- | ---|
| LightGBM | 98 | 93 |
| Random Forest | **99** | **96** |
| XGBoost |**99** | 95 |
| CNN | 38 | 59 |
| | 5-fold Cross Validation F1 | Test F1 |
| --- | --- | ---|
| LightGBM | 98 | 92 |
| Random Forest | **99** | **95** |
| XGBoost | **99** | 92 |
| CNN | 58 | 10 |
## Authors
-----------
* Mpho Mokoatle, Vukosi Marivate, Darlington Mapiye, Riana Bornman, Vanessa M. Hayes
* Contact details : [email protected]
## Citation
-----------
Bibtex Reference **TBA**
### References
<a id="1">[1]</a>
Gao, Tianyu, Xingcheng Yao, and Danqi Chen. "Simcse: Simple contrastive learning of sentence embeddings." arXiv preprint arXiv:2104.08821 (2021). |
ssmits/Falcon2-5.5B-Swedish-GGUF | ssmits | 2024-05-20T19:28:18Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"sv",
"base_model:tiiuae/falcon-11B",
"base_model:quantized:tiiuae/falcon-11B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T18:43:10Z | ---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- tiiuae/falcon-11B
license: apache-2.0
language:
- sv
---
# ssmits/Falcon2-5.5B-Swedish-Q5_K_M-GGUF
This model was converted to GGUF format from [`ssmits/Falcon2-5.5B-Swedish`](https://huggingface.co/ssmits/Falcon2-5.5B-Swedish) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ssmits/Falcon2-5.5B-Swedish) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ssmits/Falcon2-5.5B-Swedish-Q5_K_M-GGUF --model falcon2-5.5b-swedish.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ssmits/Falcon2-5.5B-Swedish-Q5_K_M-GGUF --model falcon2-5.5b-swedish.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m falcon2-5.5b-swedish.Q5_K_M.gguf -n 128
``` |
mizoru/whisper-large-ru-ORD_0.9_peft_0.2 | mizoru | 2024-05-20T19:27:57Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"ru",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T15:24:08Z | ---
language:
- ru
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openai/whisper-large-v2
metrics:
- wer
model-index:
- name: 'Whisper Large Ru ORD 0.9 Peft PEFT 4-bit Q DoRA - Mizoru '
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/mizoru/ORD/runs/te5djaa5)
# Whisper Large Ru ORD 0.9 Peft PEFT 4-bit Q DoRA - Mizoru
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the ORD_0.9 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9988
- Wer: 48.4439
- Cer: 26.5242
- Clean Wer: 40.8650
- Clean Cer: 20.9832
## Model description
More information needed
## Intended uses & limitations
More information needed
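
In the absence of a documented workflow, a minimal inference sketch is given below. It assumes the adapter is applied on top of `openai/whisper-large-v2` (as stated in the metadata), that `librosa` is available, and that `speech.wav` is a placeholder for a 16 kHz Russian recording.

```python
import torch
import librosa
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base checkpoint and attach the fine-tuned adapter weights
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, "mizoru/whisper-large-ru-ORD_0.9_peft_0.2").eval()
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")

# "speech.wav" is a placeholder path; Whisper expects 16 kHz mono audio
audio, sr = librosa.load("speech.wav", sr=16000)
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    ids = model.generate(input_features=inputs.input_features, language="ru", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```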
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Clean Cer | Clean Wer | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:-------:|:---------:|:---------:|:---------------:|:-------:|
| 1.216 | 1.0 | 550 | 27.9352 | 22.0432 | 43.2693 | 1.0350 | 50.7505 |
| 1.1847 | 2.0 | 1100 | 26.5324 | 20.9303 | 41.2903 | 1.0187 | 49.1670 |
| 1.055 | 3.0 | 1650 | 26.7141 | 21.0494 | 41.5960 | 0.9889 | 48.8428 |
| 0.9137 | 4.0 | 2200 | 26.5242 | 20.9832 | 40.8650 | 0.9988 | 48.4439 |
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1 |
hfdsajkfd/distilbert-base-uncased-finetuned-ner | hfdsajkfd | 2024-05-20T19:27:42Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-20T19:23:21Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9259054770318021
- name: Recall
type: recall
value: 0.9380243875153821
- name: F1
type: f1
value: 0.9319255348707974
- name: Accuracy
type: accuracy
value: 0.9837641190207635
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0613
- Precision: 0.9259
- Recall: 0.9380
- F1: 0.9319
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
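
Until this section is filled in, a hedged usage sketch with the `token-classification` pipeline is shown below; the example sentence is an arbitrary illustration.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hfdsajkfd/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```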
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2448 | 1.0 | 878 | 0.0713 | 0.8957 | 0.9193 | 0.9074 | 0.9796 |
| 0.0517 | 2.0 | 1756 | 0.0597 | 0.9206 | 0.9357 | 0.9281 | 0.9830 |
| 0.0314 | 3.0 | 2634 | 0.0613 | 0.9259 | 0.9380 | 0.9319 | 0.9838 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
matthieuzone/FOURME_D_AMBERTbis | matthieuzone | 2024-05-20T19:26:21Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T19:18:11Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/FOURME_D_AMBERTbis
<Gallery />
## Model description
These are matthieuzone/FOURME_D_AMBERTbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/FOURME_D_AMBERTbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
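
Pending the author's own snippet above, a minimal sketch of running these LoRA weights with diffusers could look like the following; the base model and trigger phrase come from this card, while the prompt, inference settings, and output filename are assumptions.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model and attach the DreamBooth LoRA weights
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("matthieuzone/FOURME_D_AMBERTbis")

# The trigger phrase "a photo of sks cheese" activates the learned concept
image = pipe("a photo of sks cheese on a wooden board", num_inference_steps=30).images[0]
image.save("sks_cheese.png")
```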
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Arshia-HZ/NLP-AriaBert-Digimag | Arshia-HZ | 2024-05-20T19:26:20Z | 121 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T18:33:40Z | ---
license: apache-2.0
language:
- fa
widget:
- text: "دختری در قطار؛ پرفروشترین کتاب نیویورکتایمز را امروز رایگان بخوانید کتاب دختری در قطار هدیه امروز فیدیبو است."
- text: "استرینگکست: با ترسناکترین بیماری جهان آشنا شوید با گذر زمان و پیشرفت امکانات، سن انسانها روز بهروز بیشتر میشود. ولی با این بالا رفتن سن، بیماریهای جدید و خطرناکی خودشون را به ما نشان میدهند."
---
## Persian Text Classification [DigiMag, Persian News]
The goal of this task is to label texts in a supervised manner using two existing datasets: `DigiMag` and `Persian News`.
### DigiMag
A total of 8,515 articles scraped from [Digikala Online Magazine](https://www.digikala.com/mag/). This dataset includes seven different classes.
1. Video Games
2. Shopping Guide
3. Health Beauty
4. Science Technology
5. General
6. Art Cinema
7. Books Literature
| Label | # |
|:------------------:|:----:|
| Video Games | 1967 |
| Shopping Guide | 125 |
| Health Beauty | 1610 |
| Science Technology | 2772 |
| General | 120 |
| Art Cinema | 1667 |
| Books Literature | 254 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1YgrCYY-Z0h2z0-PfWVfOGt1Tv0JDI-qz)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT |
|:-----------------:|:-----------:|:-----------:|:-----:|
| Digikala Magazine | 93.65* | 93.59 | 90.72 | |
konstaya/qa_model_study_1 | konstaya | 2024-05-20T19:19:59Z | 131 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:sberquad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-20T17:12:23Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- sberquad
model-index:
- name: qa_model_study_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa_model_study_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sberquad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4337
## Model description
More information needed
## Intended uses & limitations
More information needed
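
As a placeholder until this section is completed, a hedged example of querying the model through the `question-answering` pipeline follows; the model was trained on sberquad (a Russian dataset), so the English toy inputs below only illustrate the API.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="konstaya/qa_model_study_1")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```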
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1351 | 1.0 | 750 | 2.6338 |
| 2.5385 | 2.0 | 1500 | 2.4813 |
| 2.3433 | 3.0 | 2250 | 2.4337 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
1aurent/vit_base_patch14_224.dinobloom | 1aurent | 2024-05-20T19:17:56Z | 30 | 1 | timm | [
"timm",
"safetensors",
"feature-extraction",
"image-classification",
"arxiv:2404.05022",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2024-05-20T18:44:40Z | ---
tags:
- timm
- feature-extraction
- image-classification
library_name: timm
license: apache-2.0
---
# Model card for vit_base_patch14_224.dinobloom

## Model Details
- **Model Type:** Feature backbone
- **Model Stats:**
- Params: 86M (base)
- Image size: 224 x 224 x 3
- Patch size: 14 x 14 x 3
- **Repository:** [github.com:marrlab/DinoBloom](https://github.com/marrlab/DinoBloom)
- **Original Weights:** [Zenodo](https://zenodo.org/records/10908163)
- **License:** [Apache License 2.0](https://github.com/marrlab/DinoBloom/blob/main/LICENSE)
- **Papers:**
- [DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology](https://arxiv.org/abs/2404.05022)
## Model Usage
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
# get example histology image
img = Image.open(
urlopen(
"https://raw.githubusercontent.com/zxaoyou/segmentation_WBC/master/Dataset%201/001.bmp"
)
)
# load model from the hub
model = timm.create_model(
model_name="hf-hub:1aurent/vit_base_patch14_224.dinobloom",
pretrained=True,
).eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
data = transforms(img).unsqueeze(0) # input is a (batch_size, num_channels, img_size, img_size) shaped tensor
output = model(data) # output is a (batch_size, num_features) shaped tensor
```
## Citation
```bibtex
@misc{koch2024dinobloom,
title = {DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology},
author = {Valentin Koch and Sophia J. Wagner and Salome Kazeminia and Ece Sancar and Matthias Hehr and Julia Schnabel and Tingying Peng and Carsten Marr},
year = {2024},
eprint = {2404.05022},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
``` |
esenergun/custom-GPT | esenergun | 2024-05-20T19:15:39Z | 232 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-20T19:15:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
comet24082002/finetune_bge_simsce_V2 | comet24082002 | 2024-05-20T19:14:36Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-20T19:13:22Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# comet24082002/finetune_bge_simsce_V2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('comet24082002/finetune_bge_simsce_V2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=comet24082002/finetune_bge_simsce_V2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5375 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CachedMultipleNegativesRankingLoss.CachedMultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ssmits/Falcon2-5.5B-Italian-GGUF | ssmits | 2024-05-20T19:12:36Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"it",
"base_model:tiiuae/falcon-11B",
"base_model:quantized:tiiuae/falcon-11B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T18:45:17Z | ---
language:
- it
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- lazymergekit
- llama-cpp
- gguf-my-repo
base_model:
- tiiuae/falcon-11B
---
# ssmits/Falcon2-5.5B-Italian-Q5_K_M-GGUF
This model was converted to GGUF format from [`ssmits/Falcon2-5.5B-Italian`](https://huggingface.co/ssmits/Falcon2-5.5B-Italian) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ssmits/Falcon2-5.5B-Italian) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ssmits/Falcon2-5.5B-Italian-Q5_K_M-GGUF --model falcon2-5.5b-italian.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ssmits/Falcon2-5.5B-Italian-Q5_K_M-GGUF --model falcon2-5.5b-italian.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m falcon2-5.5b-italian.Q5_K_M.gguf -n 128
```
|
HariprasathSB/whisper-vulnerablee | HariprasathSB | 2024-05-20T19:11:16Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:HariprasathSB/whisper-vulnerable",
"base_model:finetune:HariprasathSB/whisper-vulnerable",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-20T17:05:07Z | ---
license: apache-2.0
base_model: HariprasathSB/whisper-vulnerable
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-vulnerablee
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-vulnerablee
This model is a fine-tuned version of [HariprasathSB/whisper-vulnerable](https://huggingface.co/HariprasathSB/whisper-vulnerable) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0136
- Wer: 77.9557
## Model description
More information needed
## Intended uses & limitations
More information needed
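
A hedged transcription sketch is given below; it assumes the repository ships the Whisper processor files, and `speech_sample.wav` is a placeholder for a 16 kHz recording.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="HariprasathSB/whisper-vulnerablee",
)
print(asr("speech_sample.wav")["text"])  # placeholder audio path
```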
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0637 | 1.7621 | 200 | 1.0136 | 77.9557 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
fine-tuned/jina-embeddings-v2-base-en-5202024-6tkj-webapp | fine-tuned | 2024-05-20T19:10:10Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Marketing",
"Analytics",
"CRM",
"Data",
"Insights",
"custom_code",
"en",
"dataset:fine-tuned/jina-embeddings-v2-base-en-5202024-6tkj-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-20T19:09:16Z | ---
license: apache-2.0
datasets:
- fine-tuned/jina-embeddings-v2-base-en-5202024-6tkj-webapp
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Marketing
- Analytics
- CRM
- Data
- Insights
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
educational content for customer insights and marketing strategies
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jina-embeddings-v2-base-en-5202024-6tkj-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
BilalMuftuoglu/beit-base-patch16-224-75-fold4 | BilalMuftuoglu | 2024-05-20T19:10:07Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T18:47:50Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-75-fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9534883720930233
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-75-fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2509
- Accuracy: 0.9535
## Model description
More information needed
## Intended uses & limitations
More information needed
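
A hedged classification sketch follows; `example.jpg` is a placeholder image, and the label names correspond to whatever folder names were used in the training imagefolder dataset.

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="BilalMuftuoglu/beit-base-patch16-224-75-fold4",
)
print(clf("example.jpg"))  # placeholder path to an input image
```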
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.5130 | 0.7907 |
| No log | 2.0 | 4 | 0.4861 | 0.7907 |
| No log | 3.0 | 6 | 0.4775 | 0.7907 |
| No log | 4.0 | 8 | 0.4419 | 0.7907 |
| 0.4909 | 5.0 | 10 | 0.3672 | 0.8605 |
| 0.4909 | 6.0 | 12 | 0.3301 | 0.8837 |
| 0.4909 | 7.0 | 14 | 0.3131 | 0.8837 |
| 0.4909 | 8.0 | 16 | 0.4535 | 0.8605 |
| 0.4909 | 9.0 | 18 | 0.3088 | 0.8372 |
| 0.3473 | 10.0 | 20 | 0.4453 | 0.8837 |
| 0.3473 | 11.0 | 22 | 0.4234 | 0.8605 |
| 0.3473 | 12.0 | 24 | 0.3601 | 0.8837 |
| 0.3473 | 13.0 | 26 | 0.3658 | 0.9070 |
| 0.3473 | 14.0 | 28 | 0.3081 | 0.8837 |
| 0.2903 | 15.0 | 30 | 0.4128 | 0.8837 |
| 0.2903 | 16.0 | 32 | 0.2555 | 0.8605 |
| 0.2903 | 17.0 | 34 | 0.3341 | 0.8837 |
| 0.2903 | 18.0 | 36 | 0.2427 | 0.8837 |
| 0.2903 | 19.0 | 38 | 0.4325 | 0.8372 |
| 0.2673 | 20.0 | 40 | 0.2637 | 0.9070 |
| 0.2673 | 21.0 | 42 | 0.2919 | 0.8837 |
| 0.2673 | 22.0 | 44 | 0.3139 | 0.8837 |
| 0.2673 | 23.0 | 46 | 0.2411 | 0.8837 |
| 0.2673 | 24.0 | 48 | 0.4645 | 0.9070 |
| 0.2103 | 25.0 | 50 | 0.5084 | 0.8605 |
| 0.2103 | 26.0 | 52 | 0.2308 | 0.9070 |
| 0.2103 | 27.0 | 54 | 0.3450 | 0.8605 |
| 0.2103 | 28.0 | 56 | 0.3444 | 0.8605 |
| 0.2103 | 29.0 | 58 | 0.2546 | 0.9070 |
| 0.1673 | 30.0 | 60 | 0.9117 | 0.8140 |
| 0.1673 | 31.0 | 62 | 0.8437 | 0.8140 |
| 0.1673 | 32.0 | 64 | 0.6758 | 0.8372 |
| 0.1673 | 33.0 | 66 | 0.8019 | 0.8140 |
| 0.1673 | 34.0 | 68 | 0.3364 | 0.8837 |
| 0.1677 | 35.0 | 70 | 0.2928 | 0.8837 |
| 0.1677 | 36.0 | 72 | 0.2547 | 0.9070 |
| 0.1677 | 37.0 | 74 | 0.2969 | 0.8837 |
| 0.1677 | 38.0 | 76 | 0.5706 | 0.8837 |
| 0.1677 | 39.0 | 78 | 0.7006 | 0.8837 |
| 0.1407 | 40.0 | 80 | 0.4321 | 0.8837 |
| 0.1407 | 41.0 | 82 | 0.4366 | 0.8837 |
| 0.1407 | 42.0 | 84 | 0.3956 | 0.8837 |
| 0.1407 | 43.0 | 86 | 0.2290 | 0.8372 |
| 0.1407 | 44.0 | 88 | 0.3665 | 0.8837 |
| 0.1474 | 45.0 | 90 | 0.4465 | 0.8605 |
| 0.1474 | 46.0 | 92 | 0.7279 | 0.8605 |
| 0.1474 | 47.0 | 94 | 0.5259 | 0.8605 |
| 0.1474 | 48.0 | 96 | 0.5832 | 0.8837 |
| 0.1474 | 49.0 | 98 | 0.7328 | 0.8837 |
| 0.1344 | 50.0 | 100 | 0.3890 | 0.8837 |
| 0.1344 | 51.0 | 102 | 0.2642 | 0.8837 |
| 0.1344 | 52.0 | 104 | 0.3710 | 0.9070 |
| 0.1344 | 53.0 | 106 | 0.4773 | 0.9070 |
| 0.1344 | 54.0 | 108 | 0.3628 | 0.9302 |
| 0.1166 | 55.0 | 110 | 0.4389 | 0.9070 |
| 0.1166 | 56.0 | 112 | 0.4813 | 0.9070 |
| 0.1166 | 57.0 | 114 | 0.5328 | 0.9070 |
| 0.1166 | 58.0 | 116 | 0.5342 | 0.9070 |
| 0.1166 | 59.0 | 118 | 0.4892 | 0.9070 |
| 0.097 | 60.0 | 120 | 0.5857 | 0.9070 |
| 0.097 | 61.0 | 122 | 0.6681 | 0.9070 |
| 0.097 | 62.0 | 124 | 0.5947 | 0.9070 |
| 0.097 | 63.0 | 126 | 0.4749 | 0.9070 |
| 0.097 | 64.0 | 128 | 0.6091 | 0.8837 |
| 0.1076 | 65.0 | 130 | 0.9725 | 0.8605 |
| 0.1076 | 66.0 | 132 | 1.1372 | 0.8140 |
| 0.1076 | 67.0 | 134 | 0.7109 | 0.8605 |
| 0.1076 | 68.0 | 136 | 0.3549 | 0.9302 |
| 0.1076 | 69.0 | 138 | 0.2709 | 0.9302 |
| 0.0914 | 70.0 | 140 | 0.3316 | 0.9302 |
| 0.0914 | 71.0 | 142 | 0.3176 | 0.9302 |
| 0.0914 | 72.0 | 144 | 0.2509 | 0.9535 |
| 0.0914 | 73.0 | 146 | 0.2256 | 0.9070 |
| 0.0914 | 74.0 | 148 | 0.2570 | 0.9070 |
| 0.0815 | 75.0 | 150 | 0.3081 | 0.9535 |
| 0.0815 | 76.0 | 152 | 0.4199 | 0.9302 |
| 0.0815 | 77.0 | 154 | 0.4324 | 0.9302 |
| 0.0815 | 78.0 | 156 | 0.3928 | 0.9302 |
| 0.0815 | 79.0 | 158 | 0.3700 | 0.9302 |
| 0.0878 | 80.0 | 160 | 0.3812 | 0.9302 |
| 0.0878 | 81.0 | 162 | 0.4300 | 0.9302 |
| 0.0878 | 82.0 | 164 | 0.4289 | 0.9302 |
| 0.0878 | 83.0 | 166 | 0.4125 | 0.9302 |
| 0.0878 | 84.0 | 168 | 0.4351 | 0.9302 |
| 0.0725 | 85.0 | 170 | 0.5046 | 0.9302 |
| 0.0725 | 86.0 | 172 | 0.5692 | 0.9070 |
| 0.0725 | 87.0 | 174 | 0.5486 | 0.9070 |
| 0.0725 | 88.0 | 176 | 0.5310 | 0.9302 |
| 0.0725 | 89.0 | 178 | 0.4662 | 0.9302 |
| 0.0944 | 90.0 | 180 | 0.4070 | 0.9302 |
| 0.0944 | 91.0 | 182 | 0.3768 | 0.9302 |
| 0.0944 | 92.0 | 184 | 0.3884 | 0.9302 |
| 0.0944 | 93.0 | 186 | 0.3851 | 0.9302 |
| 0.0944 | 94.0 | 188 | 0.3759 | 0.9302 |
| 0.0739 | 95.0 | 190 | 0.3608 | 0.9302 |
| 0.0739 | 96.0 | 192 | 0.3456 | 0.9302 |
| 0.0739 | 97.0 | 194 | 0.3360 | 0.9302 |
| 0.0739 | 98.0 | 196 | 0.3312 | 0.9302 |
| 0.0739 | 99.0 | 198 | 0.3321 | 0.9302 |
| 0.0612 | 100.0 | 200 | 0.3331 | 0.9302 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
matthieuzone/EPOISSESbis | matthieuzone | 2024-05-20T19:09:21Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T19:01:09Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/EPOISSESbis
<Gallery />
## Model description
These are matthieuzone/EPOISSESbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/EPOISSESbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
ifisch/gruene-gpt2 | ifisch | 2024-05-20T19:05:42Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T19:03:32Z | ---
license: apache-2.0
---
|
PabitraJiban/Credit-card-collection-intent-classification | PabitraJiban | 2024-05-20T19:02:56Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T19:00:22Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8798
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
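
A hedged usage sketch with the `text-classification` pipeline is shown below; the example utterance is invented, and the label names depend on the (undocumented) training data.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="PabitraJiban/Credit-card-collection-intent-classification",
)
print(clf("I already cleared my credit card dues last week, please stop calling me."))
```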
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0973 | 1.0 | 2 | 1.0807 | 0.4667 |
| 1.0801 | 2.0 | 4 | 1.0622 | 0.5333 |
| 1.0713 | 3.0 | 6 | 1.0386 | 0.5333 |
| 1.0396 | 4.0 | 8 | 1.0092 | 0.6 |
| 1.0034 | 5.0 | 10 | 0.9786 | 0.8 |
| 0.9929 | 6.0 | 12 | 0.9501 | 0.8667 |
| 0.9552 | 7.0 | 14 | 0.9236 | 0.8667 |
| 0.9386 | 8.0 | 16 | 0.9011 | 0.8667 |
| 0.9084 | 9.0 | 18 | 0.8862 | 0.8667 |
| 0.897 | 10.0 | 20 | 0.8798 | 0.8667 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
M-Amaral/my_test_mind_model | M-Amaral | 2024-05-20T18:58:44Z | 160 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-05-20T18:58:22Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_test_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.05309734513274336
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_test_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6504
- Accuracy: 0.0531
## Model description
More information needed
## Intended uses & limitations
More information needed
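
Given the near-chance accuracy reported above, the sketch below only illustrates the API; `sample_call.wav` is a placeholder for a 16 kHz mono recording.

```python
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="M-Amaral/my_test_mind_model",
)
print(clf("sample_call.wav"))  # placeholder audio path
```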
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6280 | 0.0885 |
| No log | 1.8667 | 7 | 2.6431 | 0.0619 |
| 2.6374 | 2.9333 | 11 | 2.6414 | 0.0973 |
| 2.6374 | 4.0 | 15 | 2.6483 | 0.0619 |
| 2.6374 | 4.8 | 18 | 2.6465 | 0.0619 |
| 2.6274 | 5.8667 | 22 | 2.6483 | 0.0708 |
| 2.6274 | 6.9333 | 26 | 2.6508 | 0.0708 |
| 2.6228 | 8.0 | 30 | 2.6504 | 0.0531 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tarsssss/my_model | tarsssss | 2024-05-20T18:57:32Z | 163 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-05-20T18:51:29Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
model-index:
- name: my_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
davelotito/donut_experiment_bayesian_trial_2 | davelotito | 2024-05-20T18:56:12Z | 47 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-20T18:10:18Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
metrics:
- bleu
- wer
model-index:
- name: donut_experiment_bayesian_trial_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_experiment_bayesian_trial_2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4983
- Bleu: 0.0695
- Precisions: [0.8257261410788381, 0.7717647058823529, 0.7255434782608695, 0.6816720257234726]
- Brevity Penalty: 0.0928
- Length Ratio: 0.2961
- Translation Length: 482
- Reference Length: 1628
- Cer: 0.7610
- Wer: 0.8275
## Model description
More information needed
## Intended uses & limitations
More information needed
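
A hedged inference sketch is shown below; it assumes the processor files were pushed alongside the model (otherwise load them from `naver-clova-ix/donut-base`), and the task/start prompt used during fine-tuning is not stated in this card, so `"<s>"` is a guess.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("davelotito/donut_experiment_bayesian_trial_2")
model = VisionEncoderDecoderModel.from_pretrained("davelotito/donut_experiment_bayesian_trial_2").eval()

image = Image.open("document.png").convert("RGB")  # placeholder document image
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s>"  # assumption: the actual start prompt is not documented here
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```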
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015752383448484097
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 0.3017 | 1.0 | 253 | 0.7248 | 0.0641 | [0.7525150905432596, 0.65, 0.587467362924282, 0.5276073619631901] | 0.1027 | 0.3053 | 497 | 1628 | 0.7622 | 0.8495 |
| 0.1875 | 2.0 | 506 | 0.6129 | 0.0670 | [0.7914110429447853, 0.7152777777777778, 0.6613333333333333, 0.60062893081761] | 0.0974 | 0.3004 | 489 | 1628 | 0.7565 | 0.8375 |
| 0.1171 | 3.0 | 759 | 0.5027 | 0.0697 | [0.8202479338842975, 0.7587822014051522, 0.7162162162162162, 0.6741214057507987] | 0.0941 | 0.2973 | 484 | 1628 | 0.7563 | 0.8293 |
| 0.0432 | 4.0 | 1012 | 0.4983 | 0.0695 | [0.8257261410788381, 0.7717647058823529, 0.7255434782608695, 0.6816720257234726] | 0.0928 | 0.2961 | 482 | 1628 | 0.7610 | 0.8275 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.19.1
|
matthieuzone/COMTEbis | matthieuzone | 2024-05-20T18:52:16Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T18:44:06Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/COMTEbis
<Gallery />
## Model description
These are matthieuzone/COMTEbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/COMTEbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
matthieuzone/CHEVREbis | matthieuzone | 2024-05-20T18:43:49Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T18:35:41Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/CHEVREbis
<Gallery />
## Model description
These are matthieuzone/CHEVREbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/CHEVREbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
1aurent/vit_small_patch14_224.dinobloom | 1aurent | 2024-05-20T18:38:08Z | 32 | 0 | timm | [
"timm",
"safetensors",
"feature-extraction",
"image-classification",
"arxiv:2404.05022",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2024-05-20T17:47:52Z | ---
tags:
- timm
- feature-extraction
- image-classification
library_name: timm
license: apache-2.0
---
# Model card for vit_small_patch14_224.dinobloom

## Model Details
- **Model Type:** Feature backbone
- **Model Stats:**
- Params: 22M (small)
- Image size: 224 x 224 x 3
- Patch size: 14 x 14 x 3
- **Repository:** [github.com:marrlab/DinoBloom](https://github.com/marrlab/DinoBloom)
- **Original Weights:** [Zenodo](https://zenodo.org/records/10908163)
- **License:** [Apache License 2.0](https://github.com/marrlab/DinoBloom/blob/main/LICENSE)
- **Papers:**
- [DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology](https://arxiv.org/abs/2404.05022)
## Model Usage
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
# get example histology image
img = Image.open(
urlopen(
"https://raw.githubusercontent.com/zxaoyou/segmentation_WBC/master/Dataset%201/001.bmp"
)
)
# load model from the hub
model = timm.create_model(
model_name="hf-hub:1aurent/vit_small_patch14_224.dinobloom",
pretrained=True,
).eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
data = transforms(img).unsqueeze(0) # input is a (batch_size, num_channels, img_size, img_size) shaped tensor
output = model(data) # output is a (batch_size, num_features) shaped tensor
```
## Citation
```bibtex
@misc{koch2024dinobloom,
title = {DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology},
author = {Valentin Koch and Sophia J. Wagner and Salome Kazeminia and Ece Sancar and Matthias Hehr and Julia Schnabel and Tingying Peng and Carsten Marr},
year = {2024},
eprint = {2404.05022},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
``` |
matthieuzone/CHEDDARbis | matthieuzone | 2024-05-20T18:35:26Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T18:27:17Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/CHEDDARbis
<Gallery />
## Model description
These are matthieuzone/CHEDDARbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/CHEDDARbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
BugMaker-Boyan/text2sql_schema_item_classifier_bird | BugMaker-Boyan | 2024-05-20T18:33:52Z | 4 | 0 | transformers | [
"transformers",
"roberta",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-20T18:10:24Z | ---
license: apache-2.0
---
|
mii-llm/minerva-chat-v0.1-alpha-sft | mii-llm | 2024-05-20T18:30:09Z | 5,641 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"minerva",
"sft",
"conversational",
"it",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T17:52:59Z | ---
license: cc-by-nc-4.0
language:
- it
tags:
- minerva
- sft
---
Minerva sft
maneln/tinyllama2 | maneln | 2024-05-20T18:26:30Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T15:59:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mullerjo/poca-SoccerTwos | Mullerjo | 2024-05-20T18:25:51Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-05-20T18:23:22Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Mullerjo/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
armaniii/llama-argument-classification | armaniii | 2024-05-20T18:21:19Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-classification",
"arxiv:2405.00828",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-13T18:57:13Z | ---
library_name: transformers
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
```python
import torch
import tqdm
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("armaniii/llama-argument-classification")
tokenizer = AutoTokenizer.from_pretrained("armaniii/llama-argument-classification")
model.to(device)
model.eval()

# `data` is an iterable of text batches and `df` the matching dataframe (both defined elsewhere)
predictions = []
for batch in tqdm.tqdm(data):
    with torch.no_grad():
        input_text = tokenizer(batch, padding=True, truncation=True, max_length=2048, return_tensors="pt").to(device)
        output = model(**input_text)
        logits = output.logits
        predicted_class = torch.argmax(logits, dim=1)

        # Convert logits to a list of predicted labels
        predictions.extend(predicted_class.cpu().tolist())

# Attach the predicted class ids and map them to label names
df["predictions"] = predictions
num2label = {
    0: "NoArgument",
    1: "Argument"
}
```
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
(https://arxiv.org/abs/2405.00828)
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DukeNLP/Prob-Gen-8B | DukeNLP | 2024-05-20T18:21:14Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T16:17:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
This model was fine-tuned from [Llama-3-8B from Meta](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using 4-bit QLoRA on 3,644 GPT-4-generated grade school math word problems. It generates multiple-choice math word problems within specified contexts.
<!--
## Model Details
### Model Description
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed] -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model can be loaded with HuggingFace's Transformers library:
``` python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "DukeNLP/Prob-Gen-8B"
model = AutoModelForCausalLM.from_pretrained(model_id,device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Please generate a math problem and 2 to 4 options for 8th graders with the following requirements:\nProblem context: <specified-context>\nTested knowledge: <specified-knowledge>"
model_input = tokenizer(prompt, return_tensors="pt").to("cuda")
model_output = model.generate(model_input['input_ids'], max_new_tokens=256)
print(tokenizer.batch_decode(model_output))
```
<!-- ## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
<!-- [More Information Needed]
<!-- ### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-->
<!-- ## Training Details -->
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model is fine-tuned on 3,644 GPT-4-generated 8th-grade problems, which were also annotated and evaluated by humans. An example data point is shown below:
``` json
"options": [
{
"optionText": "Multiply 500 by 3/5 to get 300 tons.",
"correct": true
},
{
"optionText": "Divide 500 by 3 to get 166.67 tons.",
"correct": false
}
],
"problemContext": "Environmental issues",
"evaluated_problem": "A town's recycling plant recycles plastic and glass in a ratio of 3:2. If the plant processes 500 tons of recyclables, how much of it is plastic?",
"unitTitle": "Solving Multi-Step Problems with Proportional Relationships"
```
### Prompting
The model can be evaluated by using the following prompt:
``` python
"""Please generate a math problem and 2 to 4 options for 8th graders with the following requirements:
Problem context: <specified-context>
Tested knowledge: <specified-knowledge>"""
```
The contexts used in the dataset are:
```
"Video Games",
"Fashion",
"Influencers/YouTubers",
"Apps and Technology",
"Movies/TV shows",
"Sports",
"Music and Concerts",
"Social Media",
"Environmental issues"
```
The knowledge areas tested in the dataset are:
```
"Operations with Rational Numbers",
"Expressions and Equations",
"Surface Area and Volume",
"Arithmetic in Base Ten",
"Evaluating Numeric Expressions",
"Properties and Theorems of Angles",
"Data Sets",
"Rational Number Arithmetic",
"Functions and Volume",
"Linear Equations and Linear Systems",
"Representing Data and Distributions",
"Algebraic Expressions",
"Ratios and Rates",
"Solving Equations and Systems of Equations",
"Operations with Integers",
"Scatter Plots",
"Solving Percentage Problems with Proportional Relationships",
"Associations in Data",
"Expressions, Equations, and Inequalities",
"Linear Relationships",
"Representing Data",
"Solving Multi-Step Problems with Proportional Relationships",
"Dividing Fractions",
"Area, Surface Area, and Volume",
"Equivalent Algebraic Expressions",
"Key Features of Linear Equations",
"Proportional Relationships and Percentages",
"Transformations",
"Representing Proportional Relationships"
```
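As a minimal illustration (the helper function below is hypothetical, but the template string matches the prompt shown above), a prompt can be assembled from one context and one knowledge area like this:
``` python
# Hypothetical helper that fills in the prompt template used for this model.
def build_prompt(context: str, knowledge: str) -> str:
    return (
        "Please generate a math problem and 2 to 4 options for 8th graders "
        "with the following requirements:\n"
        f"Problem context: {context}\n"
        f"Tested knowledge: {knowledge}"
    )

prompt = build_prompt("Video Games", "Expressions and Equations")
print(prompt)
```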
### Sample Generation
Here is an example passage from the training data:
```
Please generate a math problem and options for 8th graders with the following requirements:
Problem context: Movies/TV shows
Tested knowledge: Representing Data and Distributions
Question: Lucas counted the number of episodes in 12 seasons of a TV show. He recorded: 48, 51, 52, 55, 56, 58, 59, 60, 61, 62, 65, 67. How should he create a frequency table for the number of episodes?
Option 1: Group the data into intervals of 10, then count the number of seasons in each interval.
Is correct: False
Option 2: Group the data into intervals of 5 starting from 45, then count the number of seasons in each interval.
Is correct: True
```
And here is an example passage generated from the fine-tuned model:
```
Please generate a math problem and 2 to 4 options for 8th graders with the following requirements:
Problem context: Video Games
Tested knowledge: Expressions and Equations
Question: In a video game, the power of a character's weapon is calculated by the formula \(a^b\). If the weapon's power is \(2^{4}\), what is the value of \(a\) and \(b\)?
Option 1: \(a = 2\) and \(b = 4\)
Is correct: True
Option 2: \(a = 4\) and \(b = 2\)
Is correct: False
Option 3: \(a = 2\) and \(b = 2\)
Is correct: False
Option 4: \(a = 2\) and \(b = 8\)
Is correct: False
```
|
BilalMuftuoglu/beit-base-patch16-224-75-fold2 | BilalMuftuoglu | 2024-05-20T18:16:38Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T17:56:18Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-75-fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9534883720930233
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-75-fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2685
- Accuracy: 0.9535
## Model description
More information needed
## Intended uses & limitations
More information needed
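A minimal inference sketch (the image path is a placeholder; preprocessing is taken from the checkpoint's image processor config):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for image classification.
classifier = pipeline("image-classification", model="BilalMuftuoglu/beit-base-patch16-224-75-fold2")

# "sample.jpg" is a placeholder path to a local image; the output lists labels with scores.
print(classifier("sample.jpg"))
```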
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
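A sketch of how these values map onto `TrainingArguments` (the output directory and the per-device batch-size split are assumptions):

```python
from transformers import TrainingArguments

# Sketch only: reproduces the listed hyperparameters; other settings keep their defaults.
training_args = TrainingArguments(
    output_dir="beit-base-patch16-224-75-fold2",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # effective train batch size of 128
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```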
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.7091 | 0.5349 |
| No log | 2.0 | 4 | 0.6502 | 0.7209 |
| No log | 3.0 | 6 | 0.9193 | 0.6977 |
| No log | 4.0 | 8 | 0.7499 | 0.7442 |
| 0.6436 | 5.0 | 10 | 0.4527 | 0.8140 |
| 0.6436 | 6.0 | 12 | 0.4169 | 0.8372 |
| 0.6436 | 7.0 | 14 | 0.5773 | 0.7442 |
| 0.6436 | 8.0 | 16 | 0.4076 | 0.8605 |
| 0.6436 | 9.0 | 18 | 0.3939 | 0.8605 |
| 0.3863 | 10.0 | 20 | 0.4017 | 0.8605 |
| 0.3863 | 11.0 | 22 | 0.4918 | 0.8140 |
| 0.3863 | 12.0 | 24 | 0.2688 | 0.8372 |
| 0.3863 | 13.0 | 26 | 0.3884 | 0.8140 |
| 0.3863 | 14.0 | 28 | 0.3679 | 0.8140 |
| 0.2925 | 15.0 | 30 | 0.2802 | 0.8837 |
| 0.2925 | 16.0 | 32 | 0.2436 | 0.9070 |
| 0.2925 | 17.0 | 34 | 0.2337 | 0.9302 |
| 0.2925 | 18.0 | 36 | 0.3711 | 0.8140 |
| 0.2925 | 19.0 | 38 | 0.2372 | 0.9302 |
| 0.2289 | 20.0 | 40 | 0.2685 | 0.9535 |
| 0.2289 | 21.0 | 42 | 0.2610 | 0.9070 |
| 0.2289 | 22.0 | 44 | 0.3328 | 0.8372 |
| 0.2289 | 23.0 | 46 | 0.3479 | 0.8372 |
| 0.2289 | 24.0 | 48 | 0.2855 | 0.8837 |
| 0.219 | 25.0 | 50 | 0.2962 | 0.9070 |
| 0.219 | 26.0 | 52 | 0.4038 | 0.9070 |
| 0.219 | 27.0 | 54 | 0.3149 | 0.9070 |
| 0.219 | 28.0 | 56 | 0.3212 | 0.9070 |
| 0.219 | 29.0 | 58 | 0.4895 | 0.8605 |
| 0.1933 | 30.0 | 60 | 0.4335 | 0.8837 |
| 0.1933 | 31.0 | 62 | 0.3521 | 0.8372 |
| 0.1933 | 32.0 | 64 | 0.2960 | 0.8837 |
| 0.1933 | 33.0 | 66 | 0.4037 | 0.8372 |
| 0.1933 | 34.0 | 68 | 0.2913 | 0.8837 |
| 0.1892 | 35.0 | 70 | 0.3043 | 0.8837 |
| 0.1892 | 36.0 | 72 | 0.3602 | 0.9302 |
| 0.1892 | 37.0 | 74 | 0.3315 | 0.9302 |
| 0.1892 | 38.0 | 76 | 0.2674 | 0.9302 |
| 0.1892 | 39.0 | 78 | 0.2970 | 0.9535 |
| 0.15 | 40.0 | 80 | 0.2661 | 0.9535 |
| 0.15 | 41.0 | 82 | 0.2551 | 0.8837 |
| 0.15 | 42.0 | 84 | 0.2467 | 0.9302 |
| 0.15 | 43.0 | 86 | 0.3008 | 0.9535 |
| 0.15 | 44.0 | 88 | 0.3265 | 0.9302 |
| 0.1238 | 45.0 | 90 | 0.2668 | 0.9302 |
| 0.1238 | 46.0 | 92 | 0.2574 | 0.9302 |
| 0.1238 | 47.0 | 94 | 0.2498 | 0.9535 |
| 0.1238 | 48.0 | 96 | 0.3319 | 0.8837 |
| 0.1238 | 49.0 | 98 | 0.2358 | 0.9302 |
| 0.1063 | 50.0 | 100 | 0.2015 | 0.9302 |
| 0.1063 | 51.0 | 102 | 0.2171 | 0.9302 |
| 0.1063 | 52.0 | 104 | 0.3119 | 0.9302 |
| 0.1063 | 53.0 | 106 | 0.2674 | 0.9070 |
| 0.1063 | 54.0 | 108 | 0.3076 | 0.8837 |
| 0.1112 | 55.0 | 110 | 0.3182 | 0.8837 |
| 0.1112 | 56.0 | 112 | 0.3371 | 0.9070 |
| 0.1112 | 57.0 | 114 | 0.3540 | 0.9070 |
| 0.1112 | 58.0 | 116 | 0.4058 | 0.9070 |
| 0.1112 | 59.0 | 118 | 0.4013 | 0.9070 |
| 0.1128 | 60.0 | 120 | 0.3309 | 0.9302 |
| 0.1128 | 61.0 | 122 | 0.3272 | 0.9302 |
| 0.1128 | 62.0 | 124 | 0.4012 | 0.9070 |
| 0.1128 | 63.0 | 126 | 0.5794 | 0.8605 |
| 0.1128 | 64.0 | 128 | 0.3881 | 0.9070 |
| 0.1168 | 65.0 | 130 | 0.2990 | 0.9070 |
| 0.1168 | 66.0 | 132 | 0.3018 | 0.8837 |
| 0.1168 | 67.0 | 134 | 0.2561 | 0.9302 |
| 0.1168 | 68.0 | 136 | 0.2921 | 0.9302 |
| 0.1168 | 69.0 | 138 | 0.3258 | 0.9070 |
| 0.0846 | 70.0 | 140 | 0.2925 | 0.9302 |
| 0.0846 | 71.0 | 142 | 0.3073 | 0.9302 |
| 0.0846 | 72.0 | 144 | 0.3318 | 0.9302 |
| 0.0846 | 73.0 | 146 | 0.3427 | 0.9302 |
| 0.0846 | 74.0 | 148 | 0.3588 | 0.9070 |
| 0.0845 | 75.0 | 150 | 0.3939 | 0.9070 |
| 0.0845 | 76.0 | 152 | 0.3774 | 0.9070 |
| 0.0845 | 77.0 | 154 | 0.3746 | 0.9070 |
| 0.0845 | 78.0 | 156 | 0.4073 | 0.8837 |
| 0.0845 | 79.0 | 158 | 0.3886 | 0.9070 |
| 0.0885 | 80.0 | 160 | 0.3765 | 0.9070 |
| 0.0885 | 81.0 | 162 | 0.3977 | 0.9070 |
| 0.0885 | 82.0 | 164 | 0.3864 | 0.9070 |
| 0.0885 | 83.0 | 166 | 0.3809 | 0.9070 |
| 0.0885 | 84.0 | 168 | 0.4492 | 0.8605 |
| 0.0859 | 85.0 | 170 | 0.5479 | 0.8605 |
| 0.0859 | 86.0 | 172 | 0.5372 | 0.8605 |
| 0.0859 | 87.0 | 174 | 0.4512 | 0.8605 |
| 0.0859 | 88.0 | 176 | 0.3930 | 0.9070 |
| 0.0859 | 89.0 | 178 | 0.3842 | 0.9302 |
| 0.0764 | 90.0 | 180 | 0.3808 | 0.9302 |
| 0.0764 | 91.0 | 182 | 0.3787 | 0.9302 |
| 0.0764 | 92.0 | 184 | 0.3833 | 0.9070 |
| 0.0764 | 93.0 | 186 | 0.3912 | 0.9070 |
| 0.0764 | 94.0 | 188 | 0.3888 | 0.8837 |
| 0.0727 | 95.0 | 190 | 0.3817 | 0.8837 |
| 0.0727 | 96.0 | 192 | 0.3708 | 0.9070 |
| 0.0727 | 97.0 | 194 | 0.3640 | 0.9070 |
| 0.0727 | 98.0 | 196 | 0.3613 | 0.9302 |
| 0.0727 | 99.0 | 198 | 0.3607 | 0.9302 |
| 0.069 | 100.0 | 200 | 0.3605 | 0.9302 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2 | Zoyd | 2024-05-20T18:09:39Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T17:54:43Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
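A minimal local-inference sketch for this exl2 quant using the ExLlamaV2 Python API (the model directory and sampling settings are placeholders, and the exact API can vary between ExLlamaV2 versions):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the downloaded quant directory (placeholder path).
config = ExLlamaV2Config()
config.model_dir = "/path/to/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split the weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

print(generator.generate_simple("The capital of France is", settings, 64))
```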
|
Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2 | Zoyd | 2024-05-20T17:58:19Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-20T17:21:47Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2 | Zoyd | 2024-05-20T17:58:14Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T16:48:57Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2 | Zoyd | 2024-05-20T17:58:04Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T16:16:02Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
ryandono/fine-tune-paligema | ryandono | 2024-05-20T17:58:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T17:55:45Z | ---
license: apache-2.0
---
|
Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2 | Zoyd | 2024-05-20T17:57:49Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-20T14:38:14Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
0xlexor/genesys | 0xlexor | 2024-05-20T17:57:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"region:us"
] | null | 2024-05-20T17:53:16Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
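In the absence of author-provided instructions, a minimal loading sketch (assuming this repository is a PEFT adapter for the base model listed in the card metadata):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the metadata, then apply this adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "0xlexor/genesys")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
```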
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2 | Zoyd | 2024-05-20T17:57:33Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T13:33:15Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat performs on par with or better than larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B performs on par with or better than larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2 | Zoyd | 2024-05-20T17:57:20Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-20T13:00:57Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat performs on par with or better than larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B performs on par with or better than larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
bunnycore/Blackbird-Llama-3-8B-Q5_K_M-GGUF | bunnycore | 2024-05-20T17:52:44Z | 3 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T17:52:27Z | ---
license: llama2
tags:
- merge
- mergekit
- lazymergekit
- llama-cpp
- gguf-my-repo
---
# bunnycore/Blackbird-Llama-3-8B-Q5_K_M-GGUF
This model was converted to GGUF format from [`bunnycore/Blackbird-Llama-3-8B`](https://huggingface.co/bunnycore/Blackbird-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Blackbird-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo bunnycore/Blackbird-Llama-3-8B-Q5_K_M-GGUF --model blackbird-llama-3-8b.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo bunnycore/Blackbird-Llama-3-8B-Q5_K_M-GGUF --model blackbird-llama-3-8b.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m blackbird-llama-3-8b.Q5_K_M.gguf -n 128
```
|
farzanrahmani/AriaBERT_finetuned_digimag_Epoch_3_lr_3e_4_unfreezed | farzanrahmani | 2024-05-20T17:51:22Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T17:50:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
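In the absence of an official snippet, here is a minimal sketch of loading this checkpoint with the 🤗 `pipeline` API. The Persian placeholder input and the assumption that the fine-tuned classification head was saved with this repository are the editor's, not the author's; the label set produced by the digimag fine-tuning is not documented in this card.

```python
from transformers import pipeline

# Sketch: load the fine-tuned AriaBERT digimag classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="farzanrahmani/AriaBERT_finetuned_digimag_Epoch_3_lr_3e_4_unfreezed",
)

# Placeholder Persian input; the returned label names depend on how the
# fine-tuning run configured them.
print(classifier("این یک متن نمونه است."))
```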
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sravan-gorugantu/model2024-05-20 | sravan-gorugantu | 2024-05-20T17:50:58Z | 162 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-05-20T12:37:07Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: model2024-05-20
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.96875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model2024-05-20
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0759
- Accuracy: 0.9688
## Model description
More information needed
## Intended uses & limitations
More information needed
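As a rough illustration (not supplied by the author), a fine-tuned DistilHuBERT classifier like this one is typically called through the audio-classification pipeline. The file path below is a placeholder and the label set comes from the undocumented audiofolder dataset used for fine-tuning.

```python
from transformers import pipeline

# Sketch: run the fine-tuned DistilHuBERT checkpoint on a local audio file.
classifier = pipeline("audio-classification", model="sravan-gorugantu/model2024-05-20")

# "example.wav" is a hypothetical mono recording; ffmpeg is needed for decoding.
print(classifier("example.wav"))
```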
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1694 | 1.0 | 321 | 0.1613 | 0.9408 |
| 0.1271 | 2.0 | 642 | 0.1178 | 0.9530 |
| 0.0922 | 3.0 | 963 | 0.1076 | 0.9568 |
| 0.0788 | 4.0 | 1284 | 0.0731 | 0.9691 |
| 0.0766 | 5.0 | 1605 | 0.0759 | 0.9688 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
matthieuzone/BEAUFORTbis | matthieuzone | 2024-05-20T17:49:48Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T17:36:44Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/BEAUFORTbis
<Gallery />
## Model description
These are matthieuzone/BEAUFORTbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/BEAUFORTbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
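Until the authors add their own snippet, here is a minimal sketch of the usual way SDXL DreamBooth LoRA weights are loaded with `diffusers`. The fp16 settings, the choice of VAE, and the example prompt are assumptions based on the training details above, not instructions from this repository.

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Sketch: load the SDXL base model with the fp16-fixed VAE named in this card,
# then attach the BEAUFORTbis LoRA weights.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("matthieuzone/BEAUFORTbis")

# The trigger phrase from this card should appear in the prompt.
image = pipe("a photo of sks cheese on a wooden board").images[0]
image.save("beaufort.png")
```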
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
LoML/distilbert-base-uncased-finetuned-emotion | LoML | 2024-05-20T17:48:30Z | 120 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T16:43:03Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9257130045399095
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2177
- Accuracy: 0.9255
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
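As an informal illustration (not supplied by the author), checkpoints fine-tuned on the `emotion` dataset are usually queried through the text-classification pipeline; the example sentence is a placeholder.

```python
from transformers import pipeline

# Sketch: classify the emotion of a short English sentence.
classifier = pipeline(
    "text-classification",
    model="LoML/distilbert-base-uncased-finetuned-emotion",
)

# Output format: [{"label": ..., "score": ...}]; labels are the emotion-dataset
# classes if label names were saved, otherwise generic LABEL_i ids.
print(classifier("I can't believe how well this turned out!"))
```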
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8177 | 1.0 | 250 | 0.3034 | 0.9075 | 0.9067 |
| 0.2404 | 2.0 | 500 | 0.2177 | 0.9255 | 0.9257 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
NourFakih/Vit-GPT2-COCO2017Flickr-01 | NourFakih | 2024-05-20T17:43:24Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:NourFakih/image-captioning-Vit-GPT2-Flickr8k",
"base_model:finetune:NourFakih/image-captioning-Vit-GPT2-Flickr8k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-18T22:33:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: NourFakih/image-captioning-Vit-GPT2-Flickr8k
metrics:
- rouge
model-index:
- name: Vit-GPT2-COCO2017Flickr-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Vit-GPT2-COCO2017Flickr-01
This model is a fine-tuned version of [NourFakih/image-captioning-Vit-GPT2-Flickr8k](https://huggingface.co/NourFakih/image-captioning-Vit-GPT2-Flickr8k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2789
- Rouge1: 40.4777
- Rouge2: 15.156
- Rougel: 36.8755
- Rougelsum: 36.8813
- Gen Len: 11.92
## Model description
More information needed
## Intended uses & limitations
More information needed
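As a rough sketch (not an official example), ViT-GPT2 encoder-decoder captioning checkpoints of this kind are normally driven through the image-to-text pipeline; the image path below is a placeholder.

```python
from transformers import pipeline

# Sketch: caption a local image with the fine-tuned ViT-GPT2 encoder-decoder.
captioner = pipeline("image-to-text", model="NourFakih/Vit-GPT2-COCO2017Flickr-01")

# "photo.jpg" is a hypothetical path; an image URL or PIL image also works.
print(captioner("photo.jpg"))
```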
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Gen Len | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.2185 | 0.08 | 500 | 11.9627 | 0.2288 | 41.2368 | 15.6218 | 37.5796 | 37.5754 |
| 0.2097 | 0.15 | 1000 | 12.1819 | 0.2266 | 41.0126 | 15.773 | 37.2736 | 37.2843 |
| 0.2067 | 0.23 | 1500 | 11.1865 | 0.2260 | 41.0707 | 15.534 | 37.4934 | 37.5044 |
| 0.1997 | 0.31 | 2000 | 11.4404 | 0.2251 | 41.5488 | 15.8208 | 37.704 | 37.7153 |
| 0.1962 | 0.38 | 2500 | 12.1219 | 0.2241 | 41.6067 | 16.1235 | 37.8372 | 37.8403 |
| 0.1891 | 0.46 | 3000 | 12.0462 | 0.2246 | 41.7488 | 16.5323 | 38.0498 | 38.0689 |
| 0.1942 | 0.54 | 3500 | 11.8842 | 0.2252 | 41.3542 | 15.7955 | 37.8567 | 37.8759 |
| 0.186 | 0.62 | 4000 | 11.6954 | 0.2256 | 41.4582 | 15.8671 | 37.7381 | 37.7557 |
| 0.1822 | 0.69 | 4500 | 11.6962 | 0.2253 | 41.6779 | 15.8426 | 37.9166 | 37.9538 |
| 0.1829 | 0.77 | 5000 | 11.695 | 0.2248 | 41.8987 | 16.4174 | 38.3064 | 38.321 |
| 0.1786 | 0.85 | 5500 | 11.9762 | 0.2251 | 40.9742 | 15.6616 | 37.3227 | 37.3401 |
| 0.1808 | 0.92 | 6000 | 11.7042 | 0.2260 | 41.5023 | 16.0289 | 37.9925 | 37.9843 |
| 0.1758 | 1.0 | 6500 | 11.8888 | 0.2262 | 41.3528 | 16.0559 | 37.8786 | 37.8588 |
| 0.1326 | 1.08 | 7000 | 11.8173 | 0.2394 | 40.7818 | 15.486 | 37.2677 | 37.2794 |
| 0.1291 | 1.15 | 7500 | 11.7969 | 0.2412 | 41.4117 | 16.2382 | 37.9863 | 37.9964 |
| 0.1314 | 1.23 | 8000 | 11.7969 | 0.2436 | 41.1586 | 15.5594 | 37.512 | 37.5293 |
| 0.131 | 1.31 | 8500 | 11.8281 | 0.2427 | 41.1027 | 15.817 | 37.7167 | 37.7216 |
| 0.1322 | 1.38 | 9000 | 11.8927 | 0.2400 | 41.4453 | 16.0873 | 37.7242 | 37.735 |
| 0.1237 | 1.46 | 9500 | 11.8035 | 0.2447 | 40.704 | 15.0054 | 37.1021 | 37.1102 |
| 0.1289 | 1.54 | 10000 | 12.2473 | 0.2441 | 41.0159 | 15.5793 | 37.1366 | 37.1673 |
| 0.1236 | 1.62 | 10500 | 11.6977 | 0.2452 | 40.8137 | 15.3874 | 37.1591 | 37.1672 |
| 0.1241 | 1.69 | 11000 | 11.4181 | 0.2465 | 40.9985 | 15.3879 | 37.1388 | 37.1634 |
| 0.1219 | 1.77 | 11500 | 11.7765 | 0.2463 | 41.1345 | 15.6654 | 37.3921 | 37.4082 |
| 0.1234 | 1.85 | 12000 | 12.1512 | 0.2444 | 41.134 | 15.7004 | 37.3621 | 37.3993 |
| 0.1193 | 1.92 | 12500 | 11.6831 | 0.2466 | 40.568 | 15.1806 | 37.0715 | 37.0779 |
| 0.1148 | 2.0 | 13000 | 11.6546 | 0.2482 | 41.0991 | 15.4567 | 37.4898 | 37.5136 |
| 0.0836 | 2.08 | 13500 | 12.0708 | 0.2717 | 40.4842 | 15.0195 | 36.8428 | 36.859 |
| 0.0869 | 2.15 | 14000 | 12.0069 | 0.2731 | 40.6828 | 14.8559 | 36.8299 | 36.8515 |
| 0.0846 | 2.23 | 14500 | 12.02 | 0.2727 | 40.1785 | 14.8884 | 36.7155 | 36.7025 |
| 0.0829 | 2.31 | 15000 | 12.0535 | 0.2756 | 40.9047 | 15.2085 | 37.1447 | 37.1153 |
| 0.0855 | 2.38 | 15500 | 12.0346 | 0.2757 | 40.8628 | 14.9646 | 37.068 | 37.0583 |
| 0.0859 | 2.46 | 16000 | 11.8796 | 0.2762 | 40.924 | 15.2223 | 37.1443 | 37.1329 |
| 0.0847 | 2.54 | 16500 | 11.9292 | 0.2786 | 40.9447 | 15.2269 | 37.1398 | 37.1511 |
| 0.0831 | 2.62 | 17000 | 12.0958 | 0.2770 | 40.417 | 14.7542 | 36.6568 | 36.6345 |
| 0.0828 | 2.69 | 17500 | 11.845 | 0.2796 | 40.7295 | 15.0389 | 36.9957 | 36.9706 |
| 0.0782 | 2.77 | 18000 | 11.9369 | 0.2796 | 40.7406 | 15.1238 | 36.9906 | 36.9817 |
| 0.0798 | 2.85 | 18500 | 11.9869 | 0.2792 | 40.4692 | 15.0458 | 36.8005 | 36.7953 |
| 0.0794 | 2.92 | 19000 | 11.8985 | 0.2792 | 40.497 | 15.1883 | 36.8923 | 36.8945 |
| 0.0793 | 3.0 | 19500 | 11.92 | 0.2789 | 40.4777 | 15.156 | 36.8755 | 36.8813 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
IsThatYouCarl/lora_model | IsThatYouCarl | 2024-05-20T17:37:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T16:51:28Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** IsThatYouCarl
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ZovutVanya/ruT5-EmotionNeutralization | ZovutVanya | 2024-05-20T17:37:20Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"ru",
"base_model:ai-forever/ruT5-base",
"base_model:finetune:ai-forever/ruT5-base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-14T12:51:28Z | ---
base_model: ai-forever/ruT5-base
tags:
- generated_from_trainer
model-index:
- name: ruT5-EmotionNeutralization
results: []
language:
- ru
license: mit
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruT5-EmotionNeutralization
This model is a fine-tuned version of [ai-forever/ruT5-base](https://huggingface.co/ai-forever/ruT5-base) on an anonymized emergency-calls dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3848
- ParaScore: 0.8265
## Model description
More information needed
## Intended uses & limitations
Neutralization of emotional speech
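A minimal sketch of how such a ruT5 rewriting model is typically invoked follows; the Russian example sentence and the generation settings are placeholders chosen by the editor, not values from the training data.

```python
from transformers import pipeline

# Sketch: rewrite an emotionally charged Russian utterance in a neutral tone.
neutralizer = pipeline(
    "text2text-generation",
    model="ZovutVanya/ruT5-EmotionNeutralization",
)

# Placeholder input; real usage targets transcripts of emergency calls.
print(neutralizer("Да сколько можно ждать, приезжайте немедленно!", max_new_tokens=64))
```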
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Tokenizers 0.19.1 |
sgarrett/test_4 | sgarrett | 2024-05-20T17:30:21Z | 134 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:nferruz/ProtGPT2",
"base_model:finetune:nferruz/ProtGPT2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T17:22:16Z | ---
license: apache-2.0
base_model: nferruz/ProtGPT2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_output_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output_2
This model is a fine-tuned version of [nferruz/ProtGPT2](https://huggingface.co/nferruz/ProtGPT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.1877
- Accuracy: 0.4684
## Model description
More information needed
## Intended uses & limitations
More information needed
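For orientation only (not from the author), ProtGPT2-style checkpoints are usually sampled with the text-generation pipeline to produce candidate protein-like sequences; the prompt and sampling settings below are illustrative defaults, not values documented for this fine-tune.

```python
from transformers import pipeline

# Sketch: sample sequences from the fine-tuned ProtGPT2 checkpoint.
generator = pipeline("text-generation", model="sgarrett/test_4")

# "M" is a generic placeholder prompt; sampling parameters are illustrative.
outputs = generator("M", max_new_tokens=80, do_sample=True, top_k=950, num_return_sequences=2)
for out in outputs:
    print(out["generated_text"])
```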
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200.0
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tezcan/Kocdigital-LLM-8b-v0.1-Q4_K_M-GGUF | tezcan | 2024-05-20T17:24:35Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"tr",
"license:llama3",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T17:24:20Z | ---
language:
- tr
license: llama3
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Kocdigital-LLM-8b-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge TR
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc
value: 44.03
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag TR
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc
value: 46.73
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU TR
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.11
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA TR
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: acc
value: 48.21
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande TR
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 10
metrics:
- type: acc
value: 54.98
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k TR
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.78
name: accuracy
---
# tezcan/Kocdigital-LLM-8b-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`KOCDIGITAL/Kocdigital-LLM-8b-v0.1`](https://huggingface.co/KOCDIGITAL/Kocdigital-LLM-8b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/KOCDIGITAL/Kocdigital-LLM-8b-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo tezcan/Kocdigital-LLM-8b-v0.1-Q4_K_M-GGUF --model kocdigital-llm-8b-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo tezcan/Kocdigital-LLM-8b-v0.1-Q4_K_M-GGUF --model kocdigital-llm-8b-v0.1.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m kocdigital-llm-8b-v0.1.Q4_K_M.gguf -n 128
```
|
feysahin/Reinforce-CartPole-v1 | feysahin | 2024-05-20T17:24:22Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-20T17:24:11Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
maneln/tiny-llama | maneln | 2024-05-20T17:22:53Z | 138 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T17:00:25Z | ---
license: apache-2.0
---
|
MrBlackSheep/BOOBS_MIX_inpainting | MrBlackSheep | 2024-05-20T17:22:37Z | 2 | 0 | diffusers | [
"diffusers",
"checkpoint",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] | image-to-image | 2024-02-06T18:07:28Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: image-to-image
tags:
- checkpoint
---
### Model Description
**Inpaint model** for the BOOBS MIX checkpoint, made for realistic-style and celebrity imagery.
- **Developed by:** MrBlackSheep
- **Model type:** Checkpoint **Inpaint model**
- **License:** creativeml-openrail-m
 |
MrBlackSheep/BOOBS_MIX_Pruned.inpainting | MrBlackSheep | 2024-05-20T17:20:34Z | 9 | 0 | diffusers | [
"diffusers",
"checkpoint",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] | image-to-image | 2024-04-10T12:13:46Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: image-to-image
tags:
- checkpoint
---
### Model Description
Pruned **Inpaint model** for the BOOBS MIX checkpoint, made for realistic-style and celebrity imagery.
- **Developed by:** MrBlackSheep
- **Model type:** Checkpoint **Inpaint model** **(Pruned version)**
- **License:** creativeml-openrail-m
 |
onionLad/identifier-deberta | onionLad | 2024-05-20T17:16:54Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"token-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-20T16:27:17Z | ---
license: apache-2.0
language:
- en
---
# identifier-deberta
This model is a fine-tuned version of microsoft/deberta-v3-base on data derived from the PLABA dataset. Training was performed for 3 epochs
with a learning rate of 2e-5. The model achieves the following performance:
- Validation Loss: 0.112134
- Precision: 0.455793
- Recall: 0.379442
- F1: 0.414127
- Accuracy: 0.961042 |
farzanrahmani/AriaBERT_finetuned_digimag_Epoch_3_lr_2e_5_unfreezed | farzanrahmani | 2024-05-20T17:16:20Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T17:15:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
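Absent an official snippet, a minimal hedged sketch with the 🤗 text-classification pipeline is shown below; the Persian input is a placeholder and the label set produced by the digimag fine-tuning is not documented in this card.

```python
from transformers import pipeline

# Sketch: load this AriaBERT digimag classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="farzanrahmani/AriaBERT_finetuned_digimag_Epoch_3_lr_2e_5_unfreezed",
)
print(classifier("این یک متن نمونه است."))  # placeholder Persian input
```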
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sibozhu/cp_intent_model | sibozhu | 2024-05-20T17:16:05Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-07T06:35:21Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: cp_intent_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cp_intent_model
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 150 | 0.0020 | 1.0 |
| No log | 2.0 | 300 | 0.0203 | 0.995 |
| No log | 3.0 | 450 | 0.0005 | 1.0 |
| 0.0321 | 4.0 | 600 | 0.0003 | 1.0 |
| 0.0321 | 5.0 | 750 | 0.0003 | 1.0 |
| 0.0321 | 6.0 | 900 | 0.0002 | 1.0 |
| 0.0004 | 7.0 | 1050 | 0.0002 | 1.0 |
| 0.0004 | 8.0 | 1200 | 0.0002 | 1.0 |
| 0.0004 | 9.0 | 1350 | 0.0002 | 1.0 |
| 0.0002 | 10.0 | 1500 | 0.0002 | 1.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|