pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---
reinforcement-learning | null |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
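As a quick sanity check, the sketch below downloads the checkpoint and rolls out one episode. It assumes the course's convention of a pickled PyTorch policy saved as `model.pt` with an `act()` helper; the filename and that method are assumptions, and the `Policy` class from the course notebook must be importable for unpickling.
```python
# Minimal rollout sketch; model.pt and policy.act() follow the Deep RL
# Course convention and are assumptions, not confirmed by this card.
import gym
import torch
from huggingface_hub import hf_hub_download

checkpoint = hf_hub_download(
    repo_id="Joalbom14/Reinforce-CartPoleV1", filename="model.pt"
)
policy = torch.load(checkpoint)  # unpickles the full nn.Module

env = gym.make("CartPole-v1")
state = env.reset()  # classic gym API; newer gymnasium returns (state, info)
done, total_reward = False, 0.0
while not done:
    action, _ = policy.act(state)  # assumed course-style helper
    state, reward, done, _ = env.step(action)
    total_reward += reward
print(f"episode reward: {total_reward:.1f}")
```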
| {"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-CartPoleV1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "133.80 +/- 10.01", "name": "mean_reward", "verified": false}]}]}]} | Joalbom14/Reinforce-CartPoleV1 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-05-02T02:08:39+00:00 |
null | gpt-neox |
# joeshmoethefunnyone/pythia-70m-Q8_0-GGUF
This model was converted to GGUF format from [`EleutherAI/pythia-70m`](https://huggingface.co/EleutherAI/pythia-70m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/EleutherAI/pythia-70m) for more details on the model.
## Use with llama.cpp
Install llama.cpp with Homebrew:
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo joeshmoethefunnyone/pythia-70m-Q8_0-GGUF --model pythia-70m.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo joeshmoethefunnyone/pythia-70m-Q8_0-GGUF --model pythia-70m.Q8_0.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pythia-70m.Q8_0.gguf -n 128
```
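If you prefer to stay in Python, the same file can be pulled and run through the llama-cpp-python bindings instead of the CLI. A minimal sketch, assuming a recent llama-cpp-python release that ships `Llama.from_pretrained`:
```python
# Load the GGUF file straight from the Hub via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="joeshmoethefunnyone/pythia-70m-Q8_0-GGUF",
    filename="pythia-70m.Q8_0.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```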
| {"language": ["en"], "license": "apache-2.0", "library_name": "gpt-neox", "tags": ["pytorch", "causal-lm", "pythia", "llama-cpp", "gguf-my-repo"], "datasets": ["EleutherAI/pile"]} | joeshmoethefunnyone/pythia-70m-Q8_0-GGUF | null | [
"gpt-neox",
"gguf",
"pytorch",
"causal-lm",
"pythia",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:EleutherAI/pile",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T02:09:06+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
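Until the authors fill this section in, a generic loading sketch might look like the one below. The causal-LM head is only an assumption drawn from the repository name; nothing in this card confirms the architecture or intended task.
```python
# Generic starting point; AutoModelForCausalLM is an assumption here.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "disroom/test_gemma-1.1-7b-it-0.01M"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```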
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | disroom/test_gemma-1.1-7b-it-0.01M | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:11:26+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** jurieyel
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
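For further fine-tuning, the base model named above can be reloaded with Unsloth. A minimal sketch, assuming the current Unsloth API; note that the GGUF file in this repo is intended for llama.cpp-style runtimes, not for this loader.
```python
# Reload the 4-bit base model with Unsloth (sketch; API may change).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
```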
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | jurieyel/77hr-tuned-Llama3-8b-4bit-GGUF-q4_K_M | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:14:12+00:00 |
null | null | {} | wuzhongyanqiu/repo_name | null | [
"region:us"
] | null | 2024-05-02T02:14:31+00:00 |
|
null | null | {} | skumar310/NER-TASK | null | [
"region:us"
] | null | 2024-05-02T02:15:33+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: JackFram/llama-68m
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /data/data/final_set_cleaned/train/
type: sharegpt
conversation: chatml
- path: /data/data/map_coig_cqia.jsonl
type: sharegpt
conversation: chatml
- path: /data/data/ruozhiba.jsonl
type: sharegpt
conversation: chatml
dataset_prepared_path: last_run_prepared
val_set_size: 0
output_dir: ./out
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 4
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
evals_per_epoch: 0
eval_table_size:
saves_per_epoch: 4
debug:
deepspeed: deepspeed/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
default_system_message: "You are a helpful assistant."
special_tokens:
eos_token: "<|im_end|>"
pad_token: "<|end_of_text|>"
tokens:
- "<|im_start|>"
- "<|im_end|>"
```
</details><br>
# data/llama-68m-20240502-0037
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the datasets listed in the axolotl config above.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 8
- total_train_batch_size: 192
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.19.1
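Since the config above trains on ChatML-formatted conversations with a default system message, inference should follow the same template. A minimal sketch, assuming the tokenizer pushed with this repo carries that chat template:
```python
# ChatML-style inference sketch; the chat template is assumed to be
# bundled with the tokenizer, matching the training config above.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lu-vae/llama-68m-fft"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```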
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "JackFram/llama-68m", "model-index": [{"name": "data/llama-68m-20240502-0037", "results": []}]} | lu-vae/llama-68m-fft | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T02:16:17+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# toxic_generator_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3880
- Accuracy: 0.4552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40.0
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.19.1
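For completeness, a minimal loading sketch for research use, with the repo id taken from this card's metadata; the prompt and sampling settings are illustrative only.
```python
# Sampling sketch; settings below are illustrative, not from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "theastronuts/toxic_sentence_generator"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The movie was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```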
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "gpt2", "model-index": [{"name": "toxic_generator_model", "results": []}]} | theastronuts/toxic_sentence_generator | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T02:16:55+00:00 |
null | transformers | {} | da-xiong/tku2024 | null | [
"transformers",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-05-02T02:18:05+00:00 |
|
null | null | {"license": "mit"} | wuzhongyanqiu/firstmodel | null | [
"license:mit",
"region:us"
] | null | 2024-05-02T02:18:51+00:00 |
|
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
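Until this section is filled in, a plausible starting point is sketched below. The Pix2Struct classes are inferred from this card's tags; the image URL and the task are placeholders.
```python
# Pix2Struct inference sketch; image URL and task are placeholders.
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

repo_id = "astro21/pix2struct-base-coco-f"
processor = Pix2StructProcessor.from_pretrained(repo_id)
model = Pix2StructForConditionalGeneration.from_pretrained(repo_id)

image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(ids[0], skip_special_tokens=True))
```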
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | astro21/pix2struct-base-coco-f | null | [
"transformers",
"safetensors",
"pix2struct",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:19:03+00:00 |
null | null | # Laura - A fast, local neural text-to-speech system that sounds great.

Welcome to the repository of the Laura text-to-speech model, built on the ONNX Runtime architecture. This model was trained with Piper, a state-of-the-art machine-learning platform.
## About Laura
Laura is a small text-to-speech model designed to deliver accurate, natural-sounding speech from text. It is built on the ONNX Runtime architecture for optimal, efficient performance, and it was trained with Piper to ensure high-quality text-to-speech conversion.
## Features
- **ONNX Runtime architecture**: provides efficient, high-quality performance.
- **Trained with Piper**: guarantees accurate, natural text-to-speech conversion.
- **Easy to use**: simple to integrate into your text-to-speech projects.
## How to try Laura
You can try the Laura model at [tts.hircoir.eu.org](http://tts.hircoir.eu.org), where you can experience the quality of the text-to-speech conversion Laura offers.
# License for use of the voice model
This license sets out the conditions under which this voice model may be used. By using this voice model, you agree to comply with these conditions.
## Permissions
- **General use**: the voice model may be used for any purpose.
- **Credit**: crediting the model's creator is optional but appreciated.
## Restrictions
- **No use in paid services**: it is strictly forbidden to use the voice model in paid text-to-speech services or in any other service that charges users for the use of this voice model.
- **No commercial use**: commercial use of the voice model is not permitted if that use involves payment by end users.
## Intellectual property
- This license grants no intellectual-property rights over the voice model. All copyright and intellectual-property rights in the voice model are reserved by its creator.
## Non-compliance
Failure to comply with these conditions may result in the revocation of your right to use the voice model.
## Contact
If you have any questions or concerns about this license or its use, you can contact the creator.
---
By using this voice model, you agree to comply with all of the above conditions. Thank you for respecting these guidelines. | {"language": ["es"], "license": "cc-by-nc-4.0", "tags": ["piper", "onnx", "HirCoir", "es_MX"]} | HirCoir/Piper-TTS-Laura | null | [
"onnx",
"piper",
"HirCoir",
"es_MX",
"es",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-05-02T02:20:38+00:00 |
null | null | {} | Phanh2532/GAMA-Code-generator-v2.0 | null | [
"region:us"
] | null | 2024-05-02T02:21:29+00:00 |
|
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
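Until this section is filled in, a plausible starting point is sketched below. The seq2seq head is inferred from this card's `t5` tag, and the translation prompt is illustrative only, since the checkpoint's actual task is not stated.
```python
# T5 seq2seq sketch; the prompt is illustrative, the task is unknown.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "Luluuu/0502_T5_baseline_checkpoint_1000"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("translate English to German: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```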
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Luluuu/0502_T5_baseline_checkpoint_1000 | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T02:24:10+00:00 |
null | null | {} | apez1267/Lily-Cybersecurity-7B-v0.2-GGUF | null | [
"gguf",
"region:us"
] | null | 2024-05-02T02:24:29+00:00 |
|
reinforcement-learning | transformers |
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="pkbiswas/Phi-3-Detoxified-PPO-LoRa")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("pkbiswas/Phi-3-Detoxified-PPO-LoRa")
model = AutoModelForCausalLMWithValueHead.from_pretrained("pkbiswas/Phi-3-Detoxified-PPO-LoRa")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
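For reference, the value-head forward pass above returns the language-model logits, the loss (when labels are supplied), and a per-token scalar value estimate; that value estimate is the critic signal PPO training optimizes against.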
| {"license": "apache-2.0", "tags": ["trl", "ppo", "transformers", "reinforcement-learning"]} | pkbiswas/Phi-3-Detoxified-PPO-LoRa | null | [
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:25:31+00:00 |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
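Until this section is filled in, a plausible starting point is sketched below, using the base model named in this card's metadata. Whether the adapter targets causal-LM generation is an assumption.
```python
# PEFT adapter loading sketch; the generation use case is an assumption.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "beomi/KoAlpaca-Polyglot-5.8B"
adapter_id = "lee0901/synerEngine_v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("안녕하세요", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```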
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "beomi/KoAlpaca-Polyglot-5.8B"} | lee0901/synerEngine_v1 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:beomi/KoAlpaca-Polyglot-5.8B",
"region:us"
] | null | 2024-05-02T02:27:10+00:00 |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
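Until this section is filled in, a plausible starting point is sketched below. The pipeline class comes from this card's `diffusers:StableDiffusionXLPipeline` tag; the prompt and precision settings are illustrative only.
```python
# SDXL text-to-image sketch; prompt and dtype are illustrative.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "rubbrband/aingdiffusionXL_v125", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor landscape at dawn").images[0]
image.save("sample.png")
```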
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/aingdiffusionXL_v125 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-05-02T02:27:16+00:00 |
null | null | {} | huynq3Cyradar/bert-large-finetuned-phishing-webpage-version | null | [
"region:us"
] | null | 2024-05-02T02:29:30+00:00 |
|
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
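Until this section is filled in, a plausible starting point is sketched below. The task comes from this card's `text-classification` tag; note that Japanese BERT tokenizers typically require the `fugashi` and `unidic-lite` packages, which is an assumption about this checkpoint.
```python
# NLI-style classification sketch; labels depend on the JNLI fine-tune.
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="tsuneakikato/bert-base-japanese-v3-jnli"
)
print(classifier("今日はいい天気です。外は晴れている。"))
```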
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tsuneakikato/bert-base-japanese-v3-jnli | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:29:44+00:00 |
null | transformers |
# Azazelle/L3-RP_io-Q6_K-GGUF
This model was converted to GGUF format from [`Azazelle/L3-RP_io`](https://huggingface.co/Azazelle/L3-RP_io) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Azazelle/L3-RP_io) for more details on the model.
## Use with llama.cpp
Install llama.cpp with Homebrew:
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Azazelle/L3-RP_io-Q6_K-GGUF --model l3-rp_io.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Azazelle/L3-RP_io-Q6_K-GGUF --model l3-rp_io.Q6_K.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m l3-rp_io.Q6_K.gguf -n 128
```
| {"license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["ResplendentAI/Aura_Uncensored_l3_8B", "meta-llama/Meta-Llama-3-8B-Instruct", "ResplendentAI/Kei_Llama3_8B", "Undi95/Llama-3-Unholy-8B", "vicgalle/Roleplay-Llama-3-8B"]} | Azazelle/L3-RP_io-Q6_K-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:ResplendentAI/Aura_Uncensored_l3_8B",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:ResplendentAI/Kei_Llama3_8B",
"base_model:Undi95/Llama-3-Unholy-8B",
"base_model:vicgalle/Roleplay-Llama-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:29:54+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** hamiejuice
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | hamiejuice/annotated-1poc | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:30:16+00:00 |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5364
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9223 | 1.0 | 113 | 1.8358 | 0.49 |
| 1.1658 | 2.0 | 226 | 1.2417 | 0.62 |
| 0.9584 | 3.0 | 339 | 1.0307 | 0.71 |
| 0.7216 | 4.0 | 452 | 0.8311 | 0.72 |
| 0.4906 | 5.0 | 565 | 0.6832 | 0.83 |
| 0.4631 | 6.0 | 678 | 0.5772 | 0.84 |
| 0.2003 | 7.0 | 791 | 0.5641 | 0.84 |
| 0.099 | 8.0 | 904 | 0.5368 | 0.83 |
| 0.1449 | 9.0 | 1017 | 0.5270 | 0.82 |
| 0.0776 | 10.0 | 1130 | 0.5364 | 0.81 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
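A quick inference sketch, with the repo id taken from this card's metadata; the audio path is a placeholder for any local clip.
```python
# Genre classification sketch; the file path is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "audio-classification", model="Gunnika/distilhubert-finetuned-gtzan"
)
print(classifier("some_song.wav"))
```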
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["marsyas/gtzan"], "metrics": ["accuracy"], "base_model": "ntu-spml/distilhubert", "model-index": [{"name": "distilhubert-finetuned-gtzan", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "GTZAN", "type": "marsyas/gtzan", "config": "all", "split": "train", "args": "all"}, "metrics": [{"type": "accuracy", "value": 0.81, "name": "Accuracy"}]}]}]} | Gunnika/distilhubert-finetuned-gtzan | null | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:31:49+00:00 |
automatic-speech-recognition | transformers | {} | darianovoselova/wav2vec2-timit | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:32:21+00:00 |
|
object-detection | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
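Until this section is filled in, a plausible starting point is sketched below. The task comes from this card's `object-detection` tag; the image path is a placeholder.
```python
# DETR detection sketch; the image path is a placeholder.
from PIL import Image
from transformers import pipeline

detector = pipeline("object-detection", model="NekoJojo/DETRFaceDetection")
image = Image.open("faces.jpg")
for det in detector(image):
    print(det["label"], round(det["score"], 3), det["box"])
```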
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | NekoJojo/DETRFaceDetection | null | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:32:32+00:00 |
null | null | {} | sm09-dev/adb | null | [
"region:us"
] | null | 2024-05-02T02:33:06+00:00 |
|
text-classification | transformers | {"language": ["vi"], "license": "apache-2.0", "datasets": ["unicamp-dl/mmarco"], "pipeline_tag": "text-classification", "widget": [{"text": "UIT l\u00e0 g\u00ec. Tr\u01b0\u1eddng \u0110\u1ea1i h\u1ecdc C\u00f4ng ngh\u1ec7 Th\u00f4ng tin (ti\u1ebfng Anh: University of Information Technology \u2013 UIT) l\u00e0 m\u1ed9t trung t\u00e2m h\u00e0ng \u0111\u1ea7u v\u1ec1 nghi\u00ean c\u1ee9u khoa h\u1ecdc.", "output": [{"label": "LABEL_0", "score": 0.988}]}]} | itdainb/vietnamese-cross-encoder | null | [
"transformers",
"onnx",
"safetensors",
"roberta",
"text-classification",
"vi",
"dataset:unicamp-dl/mmarco",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:34:03+00:00 |
|
text-generation | transformers | # nbeerbower/flammen23-mistral-7B AWQ
- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
- Original model: [flammen23-mistral-7B](https://huggingface.co/nbeerbower/flammen23-mistral-7B)

## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/flammen23-mistral-7B-AWQ"
system_message = "You are flammen23-mistral-7B, incarnated as a powerful AI. You were created by nbeerbower."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "datasets": ["nbeerbower/bible-dpo"], "base_model": ["flammenai/flammen22X-mistral-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/flammen23-mistral-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"dataset:nbeerbower/bible-dpo",
"base_model:flammenai/flammen22X-mistral-7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T02:34:34+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TeeZee/GALAXY-XB-v.03
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
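As one concrete illustration (not part of the original card), a minimal Python sketch with the `llama-cpp-python` bindings, assuming you have already downloaded one of the files from the table below:

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="GALAXY-XB-v.03.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```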
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q2_K.gguf) | Q2_K | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ3_XS.gguf) | IQ3_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q3_K_S.gguf) | Q3_K_S | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ3_S.gguf) | IQ3_S | 7.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ3_M.gguf) | IQ3_M | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q3_K_M.gguf) | Q3_K_M | 7.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q3_K_L.gguf) | Q3_K_L | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ4_XS.gguf) | IQ4_XS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q4_K_S.gguf) | Q4_K_S | 9.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q4_K_M.gguf) | Q4_K_M | 9.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q5_K_S.gguf) | Q5_K_S | 11.1 | |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q5_K_M.gguf) | Q5_K_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q6_K.gguf) | Q6_K | 13.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q8_0.gguf) | Q8_0 | 17.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "TeeZee/GALAXY-XB-v.03", "no_imatrix": "nan-abort", "quantized_by": "mradermacher"} | mradermacher/GALAXY-XB-v.03-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:TeeZee/GALAXY-XB-v.03",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:36:00+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | lsr42/epic | null | [
"transformers",
"safetensors",
"EPIC",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:36:36+00:00 |
null | null | {"license": "llama3"} | tosaddler/Llama3-OpenBioLLM-70B-GGUF | null | [
"license:llama3",
"region:us"
] | null | 2024-05-02T02:37:12+00:00 |
|
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class ๐งจ](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ericpotter/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
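Not part of the original card, but since the pipeline returns standard PIL images, a small batched variant that saves its samples might look like this:

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('ericpotter/sd-class-butterflies-32')
images = pipeline(batch_size=4).images   # generate four samples in one call
for i, img in enumerate(images):
    img.save(f"butterfly_{i}.png")       # outputs are PIL images
```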
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | ericpotter/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-02T02:37:14+00:00 |
null | null | {"license": "openrail"} | rieviee/Kendji | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T02:37:29+00:00 |
|
null | null | {"license": "mit"} | cieruinfile/ev | null | [
"license:mit",
"region:us"
] | null | 2024-05-02T02:37:41+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cody628/t5-large_PREFIX_TUNING_SEQ2SEQ | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:37:50+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | vagarwal97/Mistral-7B-Instruct-FT-YTcomments | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:38:05+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shawgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
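As a rough illustration only (the actual training script is not published in this card), the listed settings map onto `transformers.TrainingArguments` roughly as follows:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="shawgpt-ft",          # illustrative output path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,    # 4 x 4 = total train batch size 16
    num_train_epochs=10,
    warmup_steps=2,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                        # "Native AMP" mixed precision
)
```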
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.5927 | 0.9231 | 3 | 3.9598 |
| 4.0453 | 1.8462 | 6 | 3.4396 |
| 3.4729 | 2.7692 | 9 | 2.9900 |
| 2.2603 | 4.0 | 13 | 2.5591 |
| 2.6738 | 4.9231 | 16 | 2.3232 |
| 2.3752 | 5.8462 | 19 | 2.1372 |
| 2.1629 | 6.7692 | 22 | 2.0113 |
| 1.5507 | 8.0 | 26 | 1.9372 |
| 1.9905 | 8.9231 | 29 | 1.8902 |
| 1.3794 | 9.2308 | 30 | 1.8743 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]} | vagarwal97/shawgpt-ft | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T02:38:07+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** Lawlux
- **License:** apache-2.0
- **Finetuned from model :** meta-llama/Meta-Llama-3-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"} | Lawlux/Lixo-Meta-Llama-3-8B-Instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:38:39+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | lsr42/sparta | null | [
"transformers",
"safetensors",
"SPARTA",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:40:00+00:00 |
null | null | {"license": "openrail"} | modeldodel/MINYEON-MINJEONG | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T02:40:14+00:00 |
|
null | null | {"license": "mit"} | abdelazizfathy2021/doma | null | [
"license:mit",
"region:us"
] | null | 2024-05-02T02:41:01+00:00 |
|
null | null | {"license": "openrail"} | modeldodel/JIWON | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T02:41:12+00:00 |
|
null | null | {} | Isjamzz/Real | null | [
"region:us"
] | null | 2024-05-02T02:43:46+00:00 |
|
text-generation | null |
## Exllama v2 Quantizations of Hermes-2-Pro-Llama-3-8B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B
## Prompt format
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```
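If you would rather not assemble the ChatML string by hand, the tokenizer's chat template can build it. A sketch, assuming the original NousResearch repo's tokenizer config (which ships this template):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-8B")
messages = [
    {"role": "system", "content": "You are Hermes 2, a helpful assistant."},
    {"role": "user", "content": "Hello, who are you?"},
]
# add_generation_prompt=True appends the trailing <|im_start|>assistant turn
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```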
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4K) | VRAM (8K) | VRAM (16K) | VRAM (32K) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2 Hermes-2-Pro-Llama-3-8B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download bartowski/Hermes-2-Pro-Llama-3-8B-exl2 --revision 6_5 --local-dir Hermes-2-Pro-Llama-3-8B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download bartowski/Hermes-2-Pro-Llama-3-8B-exl2 --revision 6_5 --local-dir Hermes-2-Pro-Llama-3-8B-exl2-6.5 --local-dir-use-symlinks False
```
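The same download can also be done from Python via `huggingface_hub` (a sketch; the local directory name is just an example):

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Hermes-2-Pro-Llama-3-8B-exl2",
    revision="6_5",                                  # branch name = bits per weight
    local_dir="Hermes-2-Pro-Llama-3-8B-exl2-6_5",
)
```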
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "quantized_by": "bartowski", "pipeline_tag": "text-generation", "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]} | bartowski/Hermes-2-Pro-Llama-3-8B-exl2 | null | [
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T02:44:45+00:00 |
null | null | {} | Xrunner/hive-o | null | [
"region:us"
] | null | 2024-05-02T02:46:01+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | lsr42/splade | null | [
"transformers",
"safetensors",
"SPLADE",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:46:01+00:00 |
null | null | {} | eminAydin/gpt2Kaggle | null | [
"region:us"
] | null | 2024-05-02T02:46:03+00:00 |
|
token-classification | transformers | {"license": "mit"} | Minwoo-Study/AIHUB_IndonesianCorpus_NER | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:46:17+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** Samoed
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"} | Samoed/PRGen-Phi-3-mini-4k-instruct-4bit-LoRA | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:46:30+00:00 |
null | null | {} | ericpotter/sd-class-butterflies-64 | null | [
"region:us"
] | null | 2024-05-02T02:46:42+00:00 |
|
text-generation | transformers |
# Llama3-TAIDE-LX-8B-Chat-Alpha1 - AWQ
- Model creator: [taide](https://huggingface.co/taide)
- Original model: [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1)
<!-- description start -->
## Description
This repo contains AWQ model files for [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
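As a concrete illustration (not taken from the original card), loading this AWQ checkpoint in vLLM could look roughly like this; the sampling settings are arbitrary:

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="chienweichang/Llama3-TAIDE-LX-8B-Chat-Alpha1-AWQ",
    quantization="awq",                 # tell vLLM these are AWQ weights
)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["請簡單介紹台灣。"], params)
print(outputs[0].outputs[0].text)
```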
## Original model card
---
* [English Version](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1/blob/main/README_en.md)
# Model Introduction
* The [TAIDE project](https://taide.tw/index) is dedicated to developing a generative AI dialogue engine model that fits the linguistic and cultural characteristics of Taiwan, and to building a trustworthy AI environment. Combining R&D capacity from industry and academia, it promotes the development of trustworthy generative AI, raises Taiwan's standing in international competition, supports industrial development, and avoids dependence on foreign technology.
* The Llama3 TAIDE series of models is based on Meta's [LLaMA3-8b](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/), with text and training material from various fields in Taiwan added to strengthen the model's ability to respond in Traditional Chinese and its performance on specific tasks. The publicly released models are:
  * [Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1): based on LLaMA3-8b, continuously pretrained on Traditional Chinese data and strengthened through instruction tuning for common office tasks and multi-turn question-and-answer dialogue, suited to chat assistants and task-oriented use. Llama3-TAIDE-LX-8B-Chat-Alpha1 is also available as a [4-bit quantized model](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1-4bit); the quantized model is provided mainly for user convenience and may affect performance or introduce unforeseen issues, so users should understand and note this.
# Model Parameters
* Number of parameters: 8B
* Maximum context length: 8K
* Traditional Chinese training data: 43B tokens
* Training time: 2336 H100 GPU hours
# Features
* Training data is strictly vetted to improve the trustworthiness and applicability of generated content
* Strengthened for common office tasks such as automatic summarization, letter writing, article writing, Chinese-to-English translation, and English-to-Chinese translation
* Strengthened with Taiwan-specific knowledge such as local culture, terminology, and national conditions
* Capable of multi-turn question-and-answer dialogue
# Application Examples
| Task | User input | Model output |
| --- | -------- | ------- |
| Article writing | Write an essay titled "An Old Photograph": choose a photo that left a deep impression on you, explain why, and describe the image and the story behind it, recording your growth, friendships with others, changes in your surroundings, and beautiful scenery. | The model writes a full Traditional Chinese essay about a yellowed black-and-white photo of the author's grandparents standing at the edge of a golden rice paddy in the Taiwanese countryside in the 1960s, recalling their farm work in an era when agriculture was Taiwan's main industry, a house fire the family survived before moving to another town, and the family history and memories the photo preserves. |
| Letter writing | As a brand manager, write a letter to the creative director of an advertising agency, proposing creative ideas for a new product's advertising campaign. | The model produces a complete business letter (with name, title, and address placeholders) proposing social-media posts with influencers on Facebook, Instagram, and YouTube, in-store tasting and trial events, partnerships with local restaurants, a product launch event, and festive gift packaging. |
| Summarization | Condense a long news article about Taiwan's "chips act", the draft amendment to Articles 10-2 and 72 of the Statute for Industrial Innovation, which offers companies holding key positions in international supply chains (semiconductors, electric vehicles, 5G) a tax credit of up to 25%, subject to thresholds on annual R&D spending, R&D intensity, and effective tax rate. | The model returns a bullet-point summary: eligibility requires 6% R&D intensity and an effective tax rate of 12% in 2023, rising to 15% in 2024 but subject to review against the OECD global minimum tax; a NT$10 billion threshold applies to investment in advanced-process equipment; the R&D spending threshold may be set between NT$6 and 7 billion so that more firms qualify; and the Ministry of Economic Affairs and Ministry of Finance aim to bring the measure into effect in June. |
| Translation | Translate the following English into Chinese: "Dongshan coffee is famous for its unique position, and the constant refinement of production methods. The flavor is admired by many caffeine afficionados." | 東山咖啡以獨特的位置，加上不斷精進的製作手法而聞名，風味更是讓許多咖啡迷趨之若鶩。 |
# Usage
* Sample code
  * Sample code and documentation will be released on GitHub later
* Prompt templates
  * General question answering
```python
chat = [
{"role": "user", "content": "{question}"},
]
prompt = tokenizer.apply_chat_template(chat)
```
  * Replace {question} with the user's input
  * Usage with a system prompt
```python
chat = [
{"role": "system", "content": "{sys}"},
{"role": "user", "content": "{question}"},
]
prompt = tokenizer.apply_chat_template(chat)
```
  * Replace {sys} with the system instruction, for example: "You are an AI assistant from Taiwan. Your name is TAIDE. You are happy to help users from a Taiwanese point of view and answer questions in Traditional Chinese."
  * Replace {question} with the user's question
  * Multi-turn question answering
```python
chat = [
{"role": "system", "content": "{sys}"},
{"role": "user", "content": "{question1}"},
{"role": "assistant", "content": "{model_anwer_1}"},
{"role": "user", "content": "{question2}"},
]
prompt = tokenizer.apply_chat_template(chat)
```
  * Replace {sys} with the system instruction, for example: "You are an AI assistant from Taiwan. Your name is TAIDE. You are happy to help users from a Taiwanese point of view and answer questions in Traditional Chinese."
  * Replace {question1} with the user's first question
  * Replace {model_answer_1} with the model's first answer
  * Replace {question2} with the user's second question
* For more details, see the [Llama3 documentation](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/)
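The snippets above only build the prompt. A minimal end-to-end sketch (not part of the original card; the dtype, device placement, and generation settings here are illustrative) might look like:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "taide/Llama3-TAIDE-LX-8B-Chat-Alpha1"  # upstream (non-AWQ) repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

chat = [
    {"role": "system", "content": "You are an AI assistant from Taiwan named TAIDE. Answer in Traditional Chinese."},
    {"role": "user", "content": "請簡單介紹台灣。"},
]
# add_generation_prompt appends the assistant header so the model starts replying
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```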
# Training Methods
* Hardware and software
  * NCHC (National Center for High-performance Computing) H100
  * Training framework: PyTorch
* Data preprocessing
  * Character normalization
  * Deduplication
  * Noise removal
    * HTML tags and JavaScript in web data
    * Non-standard characters and mojibake
    * Articles that are too short
    * Formatting artifacts in articles, such as line breaks added for layout
    * Personal information, such as emails and phone numbers
    * Inappropriate content, such as gambling and pornography
* Continuous pretraining (CP)
  * Supplemented with a large amount of Traditional Chinese knowledge from trustworthy sources
  * Hyperparameters
    * optimizer: AdamW
    * learning rate: 1e-4
    * batch size: 1M tokens
    * epoch: 1
* Fine-tuning (FT)
  * Enables the model to answer questions asked in Traditional Chinese
  * Hyperparameters
    * optimizer: AdamW
    * learning rate: 5e-5
    * batch size: 256K tokens
    * epoch: 3
# Training Data
* Continuous pretraining data (about 140 GB in total)

| Dataset | Description |
| --- | -------- |
| Litigation data | Civil, criminal, and administrative litigation records from courts at all levels, taken from the [Judicial Yuan judgments system](https://judgment.judicial.gov.tw/FJUD/default.aspx), January 2013 to December 2023. |
| CNA | Daily news articles from [CNA Chinese News](https://www.cna.com.tw/) (Central News Agency) from June 1993 to June 2023, 30 years in total, covering domestic and international politics, society, finance, culture, and daily life. |
| ETtoday News Cloud | Data from [ETtoday](https://www.ettoday.net/), October 2011 to December 2023. |
| Legislative Yuan Gazette | Gazette data from the [Legislative Yuan Gazette](https://ppg.ly.gov.tw/ppg/), from the 1st session of the 8th term through the 7th session of the 10th term. |
| Publisher book introductions | Book introductions from the [Suncolor](https://www.suncolor.com.tw/) and [Gotop](https://www.gotop.com.tw/) publisher websites. |
| GRB research project abstracts | [GRB](https://www.grb.gov.tw/) is an information system collecting government-funded research projects and their final reports; this dataset mainly covers project abstracts and research report abstracts from 1993 to 2023, in Chinese with their English counterparts. |
| Academic conference paper abstracts | Abstracts of papers from academic conferences held in Taiwan between 1988 and 2009, collected from the [academic conference paper abstracts database](https://sticnet.stpi.narl.org.tw/sticloc/ttscalle?meet:). |
| Taiwan Panorama | Articles from [Taiwan Panorama](https://www.taiwan-panorama.com/) from July 1993 to June 2023, 30 years in total, on topics such as local culture, tourism, and folk customs. |
| NAER terminology site | About 1.87 million academic terms and their translations across disciplines from the [terminology site](https://terms.naer.edu.tw/). |
| Ministry data | Partial data from ministry websites, including the Executive Yuan's ["About Taiwan"](https://www.ey.gov.tw/state/), the Ministry of Culture's ["National Cultural Memory Bank"](https://memory.culture.tw/), the National Development Council's ["Archives Support Teaching site"](https://art.archives.gov.tw/index.aspx), and the Ministry of Transportation and Communications' ["Traffic Safety Portal"](https://168.motc.gov.tw/). |
| Business Today | [Business Today](https://www.businesstoday.com.tw/) is a weekly magazine focused on finance; this dataset covers articles from January 2008 to July 2023. |
| MOE Mandarin and idiom dictionaries | Includes the following:<br>[MOE Idiom Dictionary](https://dict.idioms.moe.edu.tw/search.jsp?webMd=1&la=0), with 5,338 idioms, each with its definition, origin story and explanation, usage notes, and examples.<br>[Revised MOE Mandarin Chinese Dictionary](https://dict.revised.moe.edu.tw/?la=0&powerMode=0), collecting single characters and compound words with pronunciation, radicals, definitions, and so on, about 165,539 entries.<br>[Concise MOE Mandarin Chinese Dictionary](https://dict.concised.moe.edu.tw/?la=0&powerMode=0), an abridged edition of the revised dictionary, with 45,247 entries. |
| Scitech Vista | Science news and popular-science articles from the [Scitech Vista website](https://scitechvista.nat.gov.tw/). |
| iKnow science and technology industry information room | The [Science & Technology Industry Information Room (iKnow)](https://iknow.stpi.narl.org.tw/) provides technology market trends, strategic analyses, patent knowledge, and technology-transaction information for Taiwan and the world, focusing on innovation and development in the technology industry, covering 2008 to 2023. |
| Science Development monthly | [Science Development](https://ejournal.stpi.narl.org.tw/sd) is a popular-science monthly published by the National Science and Technology Council to promote science education, with articles from October 2004 to December 2020; from 2021 it was relaunched as the quarterly [Charming SciTech](https://www.charmingscitech.nat.gov.tw/), providing news articles on international science topics. |
| Laws & Regulations Database | The [Laws & Regulations Database](https://law.moj.gov.tw/) contains the central laws and regulations, administrative rules, draft regulations, and local self-government regulations most recently published by government agencies as of October 2023. |
| Local government travel sites | Partial data from the tourism websites of some county and city governments in Taiwan. |
| Curriculum guidelines (12-year basic education) | The general guidelines of the 12-year basic education curriculum and the subject-specific curriculum guidelines for schools at all levels. |
| CNA translated-names database | The CNA translated-names database collects Chinese translations of foreign personal names, organizations, and place names from CNA's news work, together with their originals. |
| Fairy tale books | 20 fairy tale books in total, including The Adventures of Tom Sawyer, Peter Pan, Alice's Adventures in Wonderland, Daddy-Long-Legs, and others. |
| RedPajama-Data-V2 | English data extracted from the multilingual open corpus [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data). |
| MathPile-commercial | From the open math corpus [MathPile-commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). |
| Chinese Wikipedia | The contents of [Chinese Wikipedia](https://zh.wikipedia.org/zh-tw/%E4%B8%AD%E6%96%87%E7%BB%B4%E5%9F%BA%E7%99%BE%E7%A7%91) entries as of January 2023. |
| github-code-clean | An open-source code dataset from GitHub, with unlicensed code and documents removed. |
* Fine-tuning data
  * The TAIDE team trained llama2-series models to generate the fine-tuning data, covering single-turn and multi-turn dialogue question answering on tasks such as world knowledge, creative writing, common sense, translation, summarization, programming, and Taiwanese values, for a total of 128K examples. The fine-tuning data will be released publicly later.
# Model evaluation
* taide-bench
  * Evaluation data
    * Essay writing, letter writing, summarization, English-to-Chinese translation, and Chinese-to-English translation, 500 questions in total
    * Dataset link: [taide-bench](https://huggingface.co/datasets/taide/taide-bench)
  * Evaluation method
    * Scored by GPT-4
    * Evaluation code: [taide-bench-eval](https://github.com/taide-taiwan/taide-bench-eval)
  * Evaluation scores
| Model | Chinese→English | English→Chinese | Summarization | Essay writing | Letter writing | Average |
| --- | ----- | ----- | ---- | ---- | ---- | --- |
| Llama3-TAIDE-LX-8B-Chat-Alpha1 | 7.770 | 8.280 | 8.495 | 9.605 | 8.950 | 8.620 |
| GPT3.5 | 8.880 | 8.810 | 7.450 | 9.490 | 8.750 | 8.676 |
| TAIDE-LX-7B-Chat | 7.165 | 7.685 | 7.720 | 9.635 | 9.110 | 8.263 |
| LLAMA2 7B | 6.075 | 4.475 | 5.905 | 2.625 | 3.040 | 4.424 |
| LLAMA2 13B | 6.480 | 6.135 | 6.110 | 2.565 | 3.000 | 4.858 |
| LLAMA2 70B | 6.975 | 6.375 | 6.795 | 2.625 | 2.990 | 5.152 |
# License
* [Llama3-TAIDE model community license agreement](https://drive.google.com/file/d/12-Q0WWSjG0DW6CqJQm_jr5wUGRLeb-8p/view)
# Disclaimer
* Because of limitations in its architecture and the biases that inevitably exist in the training data, any response from the language model does not represent the position of TAIDE. Additional safety mechanisms should be added before use, and responses may contain incorrect information, so users should not fully trust the output.
# Development team
* [https://taide.tw/index/teamList](https://taide.tw/index/teamList)
# Related links
* [TAIDE official site](https://taide.tw/index)
* [TAIDE Huggingface](https://huggingface.co/taide)
* [TAIDE Github](https://github.com/taide-taiwan)
* [Kuwa AI](https://kuwaai.org/)
# Citation
* [TAIDE official site](https://taide.tw/index) | {"license": "other"} | chienweichang/Llama3-TAIDE-LX-8B-Chat-Alpha1-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T02:47:53+00:00 |
text-generation | transformers | Quantizations of https://huggingface.co/HuggingFaceH4/zephyr-7b-beta
# From original readme
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
... | {"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "zephyr-7b-beta"], "pipeline_tag": "text-generation", "inference": false} | duyntnet/zephyr-7b-beta-imatrix-GGUF | null | [
"transformers",
"gguf",
"imatrix",
"zephyr-7b-beta",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-05-02T02:49:23+00:00 |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐งจ diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/sdxlHK_v097bCOSXL | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-05-02T02:49:52+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GOLM3
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1033
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8196 | 0.09 | 10 | 1.4618 |
| 1.0741 | 0.18 | 20 | 0.5866 |
| 0.3626 | 0.27 | 30 | 0.1793 |
| 0.1587 | 0.36 | 40 | 0.1503 |
| 0.1454 | 0.45 | 50 | 0.1476 |
| 0.1461 | 0.54 | 60 | 0.1476 |
| 0.143 | 0.63 | 70 | 0.1443 |
| 0.1452 | 0.73 | 80 | 0.1424 |
| 0.1345 | 0.82 | 90 | 0.1372 |
| 0.1286 | 0.91 | 100 | 0.1308 |
| 0.131 | 1.0 | 110 | 0.1317 |
| 0.123 | 1.09 | 120 | 0.1265 |
| 0.1161 | 1.18 | 130 | 0.1272 |
| 0.1188 | 1.27 | 140 | 0.1170 |
| 0.1197 | 1.36 | 150 | 0.1149 |
| 0.1139 | 1.45 | 160 | 0.1146 |
| 0.1122 | 1.54 | 170 | 0.1117 |
| 0.1093 | 1.63 | 180 | 0.1098 |
| 0.1093 | 1.72 | 190 | 0.1085 |
| 0.1096 | 1.81 | 200 | 0.1068 |
| 0.1086 | 1.9 | 210 | 0.1064 |
| 0.107 | 1.99 | 220 | 0.1065 |
| 0.0961 | 2.08 | 230 | 0.1069 |
| 0.0946 | 2.18 | 240 | 0.1061 |
| 0.0883 | 2.27 | 250 | 0.1048 |
| 0.0933 | 2.36 | 260 | 0.1039 |
| 0.0904 | 2.45 | 270 | 0.1058 |
| 0.0862 | 2.54 | 280 | 0.1057 |
| 0.0818 | 2.63 | 290 | 0.1047 |
| 0.0852 | 2.72 | 300 | 0.1046 |
| 0.0895 | 2.81 | 310 | 0.1048 |
| 0.0914 | 2.9 | 320 | 0.1044 |
| 0.0909 | 2.99 | 330 | 0.1033 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "GOLM3", "results": []}]} | Litzy619/GOLM3 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-02T02:50:12+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GOLM2
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7996 | 0.09 | 10 | 1.4594 |
| 0.9949 | 0.18 | 20 | 0.5804 |
| 0.3011 | 0.27 | 30 | 0.1728 |
| 0.1527 | 0.36 | 40 | 0.1498 |
| 0.1447 | 0.45 | 50 | 0.1491 |
| 0.1451 | 0.54 | 60 | 0.1476 |
| 0.142 | 0.63 | 70 | 0.1447 |
| 0.1422 | 0.73 | 80 | 0.1415 |
| 0.1304 | 0.82 | 90 | 0.1346 |
| 0.1241 | 0.91 | 100 | 0.1294 |
| 0.1263 | 1.0 | 110 | 0.1294 |
| 0.1163 | 1.09 | 120 | 0.1235 |
| 0.1091 | 1.18 | 130 | 0.1217 |
| 0.1143 | 1.27 | 140 | 0.1201 |
| 0.1131 | 1.36 | 150 | 0.1149 |
| 0.1127 | 1.45 | 160 | 0.1158 |
| 0.1087 | 1.54 | 170 | 0.1139 |
| 0.1086 | 1.63 | 180 | 0.1125 |
| 0.1069 | 1.72 | 190 | 0.1120 |
| 0.1027 | 1.81 | 200 | 0.1119 |
| 0.1037 | 1.9 | 210 | 0.1109 |
| 0.1072 | 1.99 | 220 | 0.1116 |
| 0.0896 | 2.08 | 230 | 0.1104 |
| 0.0918 | 2.18 | 240 | 0.1096 |
| 0.0828 | 2.27 | 250 | 0.1071 |
| 0.0861 | 2.36 | 260 | 0.1080 |
| 0.0853 | 2.45 | 270 | 0.1093 |
| 0.0809 | 2.54 | 280 | 0.1084 |
| 0.0782 | 2.63 | 290 | 0.1076 |
| 0.0814 | 2.72 | 300 | 0.1072 |
| 0.0849 | 2.81 | 310 | 0.1074 |
| 0.0838 | 2.9 | 320 | 0.1070 |
| 0.0864 | 2.99 | 330 | 0.1067 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "GOLM2", "results": []}]} | Litzy619/GOLM2 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-02T02:50:12+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ikno/rinko_300_labeling_lr_1e-5 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:50:14+00:00 |
null | null | {} | Xrunner/hive-p | null | [
"region:us"
] | null | 2024-05-02T02:50:38+00:00 |
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GOLM1
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7996 | 0.09 | 10 | 1.4084 |
| 0.9949 | 0.18 | 20 | 0.5027 |
| 0.3011 | 0.27 | 30 | 0.1578 |
| 0.1527 | 0.36 | 40 | 0.1481 |
| 0.1447 | 0.45 | 50 | 0.1469 |
| 0.1451 | 0.54 | 60 | 0.1464 |
| 0.142 | 0.63 | 70 | 0.1422 |
| 0.1422 | 0.73 | 80 | 0.1372 |
| 0.1304 | 0.82 | 90 | 0.1289 |
| 0.1241 | 0.91 | 100 | 0.1269 |
| 0.1263 | 1.0 | 110 | 0.1302 |
| 0.1163 | 1.09 | 120 | 0.1185 |
| 0.1091 | 1.18 | 130 | 0.1211 |
| 0.1143 | 1.27 | 140 | 0.1143 |
| 0.1131 | 1.36 | 150 | 0.1113 |
| 0.1127 | 1.45 | 160 | 0.1115 |
| 0.1087 | 1.54 | 170 | 0.1073 |
| 0.1086 | 1.63 | 180 | 0.1064 |
| 0.1069 | 1.72 | 190 | 0.1053 |
| 0.1027 | 1.81 | 200 | 0.1047 |
| 0.1037 | 1.9 | 210 | 0.1022 |
| 0.1072 | 1.99 | 220 | 0.1029 |
| 0.0896 | 2.08 | 230 | 0.1056 |
| 0.0918 | 2.18 | 240 | 0.1024 |
| 0.0828 | 2.27 | 250 | 0.1026 |
| 0.0861 | 2.36 | 260 | 0.1022 |
| 0.0853 | 2.45 | 270 | 0.1049 |
| 0.0809 | 2.54 | 280 | 0.1028 |
| 0.0782 | 2.63 | 290 | 0.1021 |
| 0.0814 | 2.72 | 300 | 0.1021 |
| 0.0849 | 2.81 | 310 | 0.1019 |
| 0.0838 | 2.9 | 320 | 0.1016 |
| 0.0864 | 2.99 | 330 | 0.1016 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "GOLM1", "results": []}]} | Litzy619/GOLM1 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-02T02:51:09+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Chilean Spanish Large v3
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Mezosky/es_clinical_assistance_10k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0961
- Wer: 6.9352
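As a quick-start illustration, here is a minimal, hedged transcription sketch using the `transformers` ASR pipeline; the repo id is taken from this card's metadata, while the audio path and the Spanish-language hint are assumptions.

```python
from transformers import pipeline

# Repo id as published with this card; adjust if the model is hosted elsewhere.
asr = pipeline(
    "automatic-speech-recognition",
    model="Mezosky/whisper-large-v3-clinical-assistance",
)

# "audio.wav" is a placeholder clip; the pipeline accepts a path or a numpy array.
result = asr("audio.wav", generate_kwargs={"language": "spanish"})
print(result["text"])
```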
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2816 | 0.17 | 100 | 0.2250 | 11.2827 |
| 0.1505 | 0.34 | 200 | 0.1479 | 9.8196 |
| 0.1293 | 0.51 | 300 | 0.1350 | 72.1192 |
| 0.1221 | 0.69 | 400 | 0.1292 | 9.6825 |
| 0.141 | 0.86 | 500 | 0.1194 | 53.0899 |
| 0.0922 | 1.03 | 600 | 0.1150 | 12.0380 |
| 0.0773 | 1.2 | 700 | 0.1079 | 12.8661 |
| 0.0745 | 1.37 | 800 | 0.1036 | 67.3017 |
| 0.0699 | 1.54 | 900 | 0.1016 | 8.2697 |
| 0.0917 | 1.72 | 1000 | 0.0956 | 8.6334 |
| 0.0716 | 1.89 | 1100 | 0.0968 | 7.7997 |
| 0.0441 | 2.06 | 1200 | 0.0946 | 8.3760 |
| 0.0377 | 2.23 | 1300 | 0.0963 | 7.6178 |
| 0.0417 | 2.4 | 1400 | 0.0951 | 7.5703 |
| 0.0409 | 2.57 | 1500 | 0.0926 | 7.2681 |
| 0.0356 | 2.74 | 1600 | 0.0912 | 6.8933 |
| 0.0361 | 2.92 | 1700 | 0.0918 | 7.0835 |
| 0.0215 | 3.09 | 1800 | 0.0938 | 6.9548 |
| 0.018 | 3.26 | 1900 | 0.0960 | 6.6415 |
| 0.0196 | 3.43 | 2000 | 0.0961 | 6.9352 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"language": ["es"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Mezosky/es_clinical_assistance_10k"], "metrics": ["wer"], "base_model": "openai/whisper-large-v3", "model-index": [{"name": "Whisper Chilean Spanish Large v3", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Mezosky/es_clinical_assistance_10k", "type": "Mezosky/es_clinical_assistance_10k"}, "metrics": [{"type": "wer", "value": 6.935235697300322, "name": "Wer"}]}]}]} | Mezosky/whisper-large-v3-clinical-assistance | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"es",
"dataset:Mezosky/es_clinical_assistance_10k",
"base_model:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:51:32+00:00 |
null | null | {} | foureyednymph/portals-large-lora-pop-colab | null | [
"region:us"
] | null | 2024-05-02T02:51:40+00:00 |
|
null | null | {} | msy78/cn_sdxl__100sets | null | [
"region:us"
] | null | 2024-05-02T02:52:44+00:00 |
|
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Syed-Hasan-8503/Versatile-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
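For example, one hedged way to run a quant from this repo locally is with the `llama-cpp-python` bindings; the file name below matches the Q4_K_M entry in the table, and the prompt and token budget are arbitrary.

```python
from llama_cpp import Llama

# Loads the Q4_K_M quant after downloading it from this repo (local path assumed).
llm = Llama(model_path="Versatile-7B.Q4_K_M.gguf", n_ctx=4096)

output = llm("Write one sentence about GGUF quantization.", max_tokens=64)
print(output["choices"][0]["text"])
```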
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Syed-Hasan-8503/Versatile-7B", "quantized_by": "mradermacher"} | mradermacher/Versatile-7B-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Syed-Hasan-8503/Versatile-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:54:06+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper4
This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the tiny dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5409
- Wer: 28.2719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 3.8231 | 0.2778 | 10 | 3.7088 | 76.9377 |
| 3.1925 | 0.5556 | 20 | 2.9439 | 65.5654 |
| 2.1383 | 0.8333 | 30 | 1.7221 | 61.5311 |
| 1.0671 | 1.1111 | 40 | 0.8320 | 50.6989 |
| 0.6947 | 1.3889 | 50 | 0.6587 | 41.0102 |
| 0.6263 | 1.6667 | 60 | 0.5874 | 29.7967 |
| 0.5827 | 1.9444 | 70 | 0.5402 | 27.3825 |
| 0.4222 | 2.2222 | 80 | 0.5154 | 32.0521 |
| 0.4065 | 2.5 | 90 | 0.4997 | 25.6989 |
| 0.3959 | 2.7778 | 100 | 0.4804 | 23.8247 |
| 0.3081 | 3.0556 | 110 | 0.4670 | 24.8412 |
| 0.2497 | 3.3333 | 120 | 0.4687 | 23.2846 |
| 0.2535 | 3.6111 | 130 | 0.4594 | 23.0940 |
| 0.2428 | 3.8889 | 140 | 0.4545 | 23.5070 |
| 0.1627 | 4.1667 | 150 | 0.4651 | 24.4917 |
| 0.1224 | 4.4444 | 160 | 0.4686 | 23.6976 |
| 0.1326 | 4.7222 | 170 | 0.4653 | 23.6976 |
| 0.1334 | 5.0 | 180 | 0.4741 | 24.7459 |
| 0.0659 | 5.2778 | 190 | 0.4792 | 24.6823 |
| 0.0639 | 5.5556 | 200 | 0.4760 | 33.3863 |
| 0.0667 | 5.8333 | 210 | 0.4820 | 25.4765 |
| 0.042 | 6.1111 | 220 | 0.4933 | 29.4155 |
| 0.0325 | 6.3889 | 230 | 0.5066 | 29.9873 |
| 0.0333 | 6.6667 | 240 | 0.5126 | 26.0801 |
| 0.0333 | 6.9444 | 250 | 0.5073 | 24.6188 |
| 0.0187 | 7.2222 | 260 | 0.5129 | 27.3507 |
| 0.0214 | 7.5 | 270 | 0.5209 | 28.2084 |
| 0.0187 | 7.7778 | 280 | 0.5213 | 29.3202 |
| 0.0312 | 8.0556 | 290 | 0.5274 | 34.6569 |
| 0.0172 | 8.3333 | 300 | 0.5409 | 28.2719 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1.dev0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny.en", "model-index": [{"name": "whisper4", "results": []}]} | khaingsmon/whisper4 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:54:26+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | vkrishanan569/tinyllamaforlaw | null | [
"transformers",
"pytorch",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T02:55:12+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** xkiwilabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | xkiwilabs/lora_opLLama3_modelv7 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:55:36+00:00 |
text-generation | transformers | {} | vkrishanan569/llama2_for_law | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T02:55:57+00:00 |
|
null | null | {} | foureyednymph/portals-large-lora-melody-colab | null | [
"region:us"
] | null | 2024-05-02T02:56:28+00:00 |
|
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ceb_b128_le5_s4000
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3943
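For reference, a minimal, hedged synthesis sketch follows; the repo id, the speaker-embedding source (the standard CMU Arctic x-vectors), and the Cebuano sample sentence are assumptions based on this card's name and common SpeechT5 usage, not statements from the card.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "mikhail-panzo/ceb_b128_le5_s4000"  # assumed repo id for this checkpoint
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# A commonly used speaker embedding from the CMU Arctic x-vector set.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Maayong buntag!", return_tensors="pt")  # Cebuano sample
speech = model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```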
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.5272 | 39.2157 | 500 | 0.4584 |
| 0.4634 | 78.4314 | 1000 | 0.4220 |
| 0.4466 | 117.6471 | 1500 | 0.4104 |
| 0.4289 | 156.8627 | 2000 | 0.4017 |
| 0.4223 | 196.0784 | 2500 | 0.3969 |
| 0.4149 | 235.2941 | 3000 | 0.3960 |
| 0.4129 | 274.5098 | 3500 | 0.3962 |
| 0.4108 | 313.7255 | 4000 | 0.3943 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "ceb_b128_le5_s4000", "results": []}]} | mikhail-panzo/ceb_b128_le5_s4000 | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:56:37+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** theGhoul21
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | theGhoul21/srl-sft-010524-16bit-v0.2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:58:48+00:00 |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | wuzhongyanqiu/dummy-model | null | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:59:23+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# miniCPM-2b-chatml
This model is a fine-tuned version of [openbmb/MiniCPM-2B-128k](https://huggingface.co/openbmb/MiniCPM-2B-128k) on the generator dataset.
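Since this repo stores a PEFT adapter on top of MiniCPM-2B-128k, one hedged way to load it for inference is via `AutoPeftModelForCausalLM`; the repo id and the `trust_remote_code` flag are assumptions (MiniCPM ships custom modeling code), not instructions from this card.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "DuongTrongChi/miniCPM-2b-chatml"  # assumed id of this adapter repo
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    "openbmb/MiniCPM-2B-128k", trust_remote_code=True
)
```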
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "openbmb/MiniCPM-2B-128k", "model-index": [{"name": "miniCPM-2b-chatml", "results": []}]} | DuongTrongChi/miniCPM-2b-chatml | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:openbmb/MiniCPM-2B-128k",
"region:us"
] | null | 2024-05-02T02:59:31+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | DuongTrongChi/minicpm-test | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:59:41+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Weyaxi/Bagel-Hermes-34B-Slerp
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
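As a hedged convenience sketch, a single quant file from this repo can be fetched programmatically with `huggingface_hub`; the chosen file name matches the Q4_K_M entry in the table below and is only an example.

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/Bagel-Hermes-34B-Slerp-GGUF",
    filename="Bagel-Hermes-34B-Slerp.Q4_K_M.gguf",
)
print(path)
```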
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Weyaxi/Bagel-Hermes-34B-Slerp", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE", "license_name": "yi-license", "quantized_by": "mradermacher"} | mradermacher/Bagel-Hermes-34B-Slerp-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Weyaxi/Bagel-Hermes-34B-Slerp",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:02:01+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_withdpo_4iters_bs256_5101lr_iter_4
This model is a fine-tuned version of [ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3](https://huggingface.co/ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
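The total train batch size above follows directly from the per-device settings; a quick arithmetic check:
```python
# total = per-device batch size x gradient accumulation steps x number of devices
train_batch_size = 8
gradient_accumulation_steps = 4
num_devices = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 256
```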
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3", "model-index": [{"name": "0.0001_withdpo_4iters_bs256_5101lr_iter_4", "results": []}]} | ShenaoZ/0.0001_withdpo_4iters_bs256_5101lr_iter_4 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T03:02:08+00:00 |
null | null | {} | Yicheng1215/HAN462 | null | [
"region:us"
] | null | 2024-05-02T03:03:15+00:00 |
|
null | null | {} | tegs001/retrieval_safetensors | null | [
"region:us"
] | null | 2024-05-02T03:04:03+00:00 |
|
text-generation | transformers |
I'm back and doing well! I've got a job in the field now, so we'll see in the long run how that affects my open-source output.
Here we have an 11B Llama 3 instruct model for future work.
EDIT: I made a YAML mistake with Part Funnel, but it still works well.
---

This is a model-stock merge of 3 models:
- Part Wave
- Part Block
- Part Funnel
With Part Funnel as the base.
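A quick way to see the YAML mistake mentioned above is to count the layers each part (listed below) actually keeps; mergekit's `layer_range` is half-open, so degenerate slices like `[14, 14]` select zero layers. A small sketch, assuming those semantics:
```python
# Layers contributed by a slice = end - start (half-open ranges).
def total_layers(ranges):
    return sum(end - start for start, end in ranges)

part_wave  = total_layers([(0, 12), (8, 18), (13, 23), (18, 32)])  # 46
part_block = total_layers([(0, 15), (8, 23), (16, 32)])            # 46
# All the (n, n) slices in Part Funnel contribute nothing:
part_funnel = total_layers(
    [(0, 15), (8, 23), (16, 32)]
    + [(n, n) for n in (14, 13, 12, 11, 10, 9, 22, 21, 20, 19, 18, 17)]
)  # still 46
print(part_wave, part_block, part_funnel)
```
So all three parts end up with 46 layers, which is consistent with the ~11B size in the title.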
---
Part Wave:
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [0, 12]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [8, 18]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [13, 23]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [18, 32]
---
Part Block:
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [0, 15]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [8, 23]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [16, 32]
---
Part Funnel:
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [0, 15]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [14, 14]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [13, 13]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [12, 12]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [11, 11]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [10, 10]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [9, 9]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [8, 23]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [22, 22]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [21, 21]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [20, 20]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [19, 19]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [18, 18]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [17, 17]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [16, 32] | {"license": "llama3"} | athirdpath/Llama-3-11b-Instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T03:04:58+00:00 |
text-classification | transformers | We have finetuned BERT base on Brown corpus to predict if a text was written using a non-fiction straight forward style or a fictional descriptive style. Here fiction and non-fiction only refers to the writing style and not to factual correctness of whats written in the text. For more details, please see this blog and the referenced paper: https://bekushal.medium.com/fictometer-a-simple-and-explainable-algorithm-for-sentiment-analysis-31186d2a8c7e
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# Load the model
model_name = "bekushal/FictoBERT"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Example input text
input_text = "It is a sunny day with a nice wind blowing and I am feeling very happy."
# Preprocess the input
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, padding=True)
# Perform inference
outputs = model(**inputs)
# Get predicted class probabilities
predicted_probabilities = outputs.logits.softmax(dim=-1)
# Get predicted class label
predicted_label = predicted_probabilities.argmax().item()
# Convert predicted label to human-readable format
predicted_class = "fiction" if predicted_label == 1 else "non-fiction"
# Display results
print("Predicted class:", predicted_class)
print("Predicted class probabilities [non-fiction, fiction]:", predicted_probabilities)
```
---
license: apache-2.0
---
| {} | bekushal/FictoBERT | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:05:30+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** jurieyel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
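A minimal loading sketch with Unsloth (the sequence length and 4-bit flag are assumptions; adjust to your use case):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="jurieyel/text2sql-tinyllama-4bit",
    max_seq_length=2048,   # assumption: size to your prompts
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```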
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/tinyllama-bnb-4bit"} | jurieyel/text2sql-tinyllama-4bit | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:05:49+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | jurieyel/text2sql-tinyllama-4bit-tokenizer | null | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:05:59+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1406
- F1: 0.8628
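For a quick smoke test of the fine-tuned tagger, here is a token-classification pipeline sketch (the German example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="gopidon/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```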
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.26 | 1.0 | 525 | 0.1569 | 0.8259 |
| 0.1256 | 2.0 | 1050 | 0.1514 | 0.8406 |
| 0.0786 | 3.0 | 1575 | 0.1406 | 0.8628 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]} | gopidon/xlm-roberta-base-finetuned-panx-de | null | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:06:22+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hi000000/insta_llama2-koen | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:06:25+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mi-super-modelo
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6450
- Accuracy: 0.125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6111 | 0.5 | 5 | 1.6301 | 0.175 |
| 1.6035 | 1.0 | 10 | 1.6450 | 0.125 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "mi-super-modelo", "results": []}]} | miguelpr/mi-super-modelo | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:06:40+00:00 |
text-generation | transformers | Experimental: Tiefighter Expanded to 15.6 B / 48 layers.
# D_AU-Tiefighter-Expanded-13B-pass
D_AU-Tiefighter-Expanded-13B-pass is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter)
* [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: KoboldAI/LLaMA2-13B-Tiefighter
layer_range: [0, 40]
- sources:
- model: KoboldAI/LLaMA2-13B-Tiefighter
layer_range: [32, 40]
merge_method: passthrough
dtype: bfloat16
```
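A quick sanity check of the layer and parameter counts implied by the config above (the per-layer size is a back-of-the-envelope assumption for a 13B-class Llama 2):
```python
# Passthrough stacking: 40 base layers plus duplicated layers 32-39.
total_layers = 40 + (40 - 32)               # 48
approx_params = 0.33 + total_layers * 0.32  # ~0.32B per block + ~0.33B embeddings
print(total_layers, round(approx_params, 1))  # 48, ~15.7B -- close to the 15.6B figure
```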
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DavidAU/D_AU-Tiefighter-Expanded-13B-pass"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "KoboldAI/LLaMA2-13B-Tiefighter"], "base_model": ["KoboldAI/LLaMA2-13B-Tiefighter", "KoboldAI/LLaMA2-13B-Tiefighter"]} | DavidAU/D_AU-Tiefighter-Expanded-13B-pass | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"KoboldAI/LLaMA2-13B-Tiefighter",
"base_model:KoboldAI/LLaMA2-13B-Tiefighter",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T03:07:53+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_InstructionGeneratorDelta
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0789
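A minimal generation sketch for trying the model (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="gjonesQ02/S1_InstructionGeneratorDelta")
print(generator("Scope of Work:", max_new_tokens=50)[0]["generated_text"])
```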
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 173 | 0.1487 |
| No log | 2.0 | 346 | 0.1222 |
| 0.3431 | 3.0 | 519 | 0.1085 |
| 0.3431 | 4.0 | 692 | 0.0998 |
| 0.3431 | 5.0 | 865 | 0.0949 |
| 0.1133 | 6.0 | 1038 | 0.0911 |
| 0.1133 | 7.0 | 1211 | 0.0889 |
| 0.1133 | 8.0 | 1384 | 0.0873 |
| 0.1007 | 9.0 | 1557 | 0.0855 |
| 0.1007 | 10.0 | 1730 | 0.0842 |
| 0.1007 | 11.0 | 1903 | 0.0831 |
| 0.095 | 12.0 | 2076 | 0.0823 |
| 0.095 | 13.0 | 2249 | 0.0817 |
| 0.095 | 14.0 | 2422 | 0.0810 |
| 0.0913 | 15.0 | 2595 | 0.0807 |
| 0.0913 | 16.0 | 2768 | 0.0803 |
| 0.0913 | 17.0 | 2941 | 0.0800 |
| 0.0892 | 18.0 | 3114 | 0.0797 |
| 0.0892 | 19.0 | 3287 | 0.0794 |
| 0.0892 | 20.0 | 3460 | 0.0793 |
| 0.0879 | 21.0 | 3633 | 0.0791 |
| 0.0879 | 22.0 | 3806 | 0.0790 |
| 0.0879 | 23.0 | 3979 | 0.0790 |
| 0.0868 | 24.0 | 4152 | 0.0789 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilgpt2", "model-index": [{"name": "S1_InstructionGeneratorDelta", "results": []}]} | gjonesQ02/S1_InstructionGeneratorDelta | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T03:08:02+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** theGhoul21
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | theGhoul21/srl-sft-010524-Q8_0-v0.2 | null | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:09:35+00:00 |
text-generation | allennlp | {"language": ["en", "ar"], "license": "apache-2.0", "library_name": "allennlp", "tags": ["finance", "legal"], "metrics": ["google_bleu"], "pipeline_tag": "text-generation"} | onlyonekinginxxxcommunity/ARTICWOLFGLOBAL | null | [
"allennlp",
"finance",
"legal",
"text-generation",
"en",
"ar",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T03:09:55+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisperFinetuneFinal
This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5711
- Wer: 22.2920
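A minimal transcription sketch (the audio path is an assumption; any local audio file works):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="shljessie/whisperFinetuneFinal")
print(asr("sample.wav")["text"])  # replace with a real audio file
```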
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 2.8037 | 0.2778 | 10 | 1.8972 | 34.0973 |
| 0.8101 | 0.5556 | 20 | 0.6863 | 28.6970 |
| 0.5778 | 0.8333 | 30 | 0.5491 | 22.8885 |
| 0.4375 | 1.1111 | 40 | 0.4962 | 20.7535 |
| 0.3437 | 1.3889 | 50 | 0.4767 | 20.2512 |
| 0.3277 | 1.6667 | 60 | 0.4921 | 21.0047 |
| 0.3431 | 1.9444 | 70 | 0.4972 | 20.4082 |
| 0.1331 | 2.2222 | 80 | 0.5317 | 25.7143 |
| 0.1385 | 2.5 | 90 | 0.5308 | 20.4396 |
| 0.146 | 2.7778 | 100 | 0.5711 | 22.2920 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1.dev0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny.en", "model-index": [{"name": "whisperFinetuneFinal", "results": []}]} | shljessie/whisperFinetuneFinal | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:13:10+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3463
- Accuracy: 0.8463
- Recall: 0.8463
- F1: 0.8464
- Precision: 0.8482
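A minimal inference sketch (the input image path is an assumption):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Kushagra07/swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask",
)
print(classifier("document.jpg"))  # top classes with scores
```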
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| No log | 0.9974 | 293 | 0.6222 | 0.7901 | 0.7901 | 0.7737 | 0.7747 |
| No log | 1.9983 | 587 | 0.4901 | 0.8063 | 0.8063 | 0.7998 | 0.8066 |
| No log | 2.9991 | 881 | 0.4374 | 0.8225 | 0.8225 | 0.8170 | 0.8356 |
| No log | 4.0 | 1175 | 0.4262 | 0.8340 | 0.8340 | 0.8270 | 0.8541 |
| No log | 4.9974 | 1468 | 0.4079 | 0.8310 | 0.8310 | 0.8290 | 0.8379 |
| No log | 5.9983 | 1762 | 0.4117 | 0.8370 | 0.8370 | 0.8361 | 0.8509 |
| No log | 6.9991 | 2056 | 0.3807 | 0.8370 | 0.8370 | 0.8361 | 0.8416 |
| No log | 8.0 | 2350 | 0.3419 | 0.8595 | 0.8595 | 0.8583 | 0.8609 |
| No log | 8.9974 | 2643 | 0.3628 | 0.8438 | 0.8438 | 0.8424 | 0.8448 |
| 0.4492 | 9.9745 | 2930 | 0.3638 | 0.8399 | 0.8399 | 0.8394 | 0.8410 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0a0+81ea7a4
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "f1", "precision"], "base_model": "microsoft/swinv2-base-patch4-window8-256", "model-index": [{"name": "swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8463175819497658, "name": "Accuracy"}, {"type": "recall", "value": 0.8463175819497658, "name": "Recall"}, {"type": "f1", "value": 0.8463640211224454, "name": "F1"}, {"type": "precision", "value": 0.8481964005333177, "name": "Precision"}]}]}]} | Kushagra07/swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask | null | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-base-patch4-window8-256",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:14:33+00:00 |
automatic-speech-recognition | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | JunWorks/Quantized_4bit_WhisperSmallOri_FP16 | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-02T03:18:14+00:00 |
null | diffusers | {} | CHU1226/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-02T03:18:46+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** theGhoul21
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | theGhoul21/srl-sft-010524-gguf-16bit-v0.2 | null | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:18:54+00:00 |
text2text-generation | transformers | {"license": "mit"} | hemanthlepcha0/grammar-correction | null | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:20:04+00:00 |
|
null | transformers |
# Llama-3-KoEn-8B-xtuner-llava-preview
<!-- Provide a quick summary of what the model is/does. -->
Llama-3-KoEn-8B-xtuner-llava-preview is a Korean-capable multimodal model based on the LLaVA architecture, merged via the [ChatVector](https://arxiv.org/abs/2310.04799) method from 2 models:
1) [beomi/Llama-3-KoEn-8B-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview)
2) [xtuner/llava-llama-3-8b-transformers](https://huggingface.co/xtuner/llava-llama-3-8b-transformers)
## Model Details
### Model Description
- **Developed by:** Junbum Lee (Beomi)
- **Model type:** HuggingFace LLaVA
- **Language(s) (NLP):** Korean, English
- **License:** cc-by-nc-sa-4.0 under Llama3 License
- **Merged from model:** [beomi/Llama-3-KoEn-8B-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) & [xtuner/llava-llama-3-8b-transformers](https://huggingface.co/xtuner/llava-llama-3-8b-transformers)
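Conceptually, the ChatVector-style merge adds the (KoEn minus base) weight delta onto the language tower of the LLaVA checkpoint. Below is a heavily simplified sketch of that arithmetic; the base-model id and the key alignment are illustrative assumptions, not the exact recipe used here:
```python
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
koen = AutoModelForCausalLM.from_pretrained("beomi/Llama-3-KoEn-8B-preview", torch_dtype=torch.bfloat16)

base_sd, koen_sd = base.state_dict(), koen.state_dict()
chat_vector = {k: koen_sd[k] - base_sd[k] for k in base_sd}  # the "KoEn" task vector
# This delta is then added onto the matching language-model weights of
# xtuner/llava-llama-3-8b-transformers; aligning the key names is the fiddly part.
```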
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

> Two versions are recommended
>
> v1. `revision='a38aac3'`: Basic ChatVector, with [25B+ trained KoEn ckpt(rev. d4d25a2)](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview/commit/d4d25a2).
>
> v1-1. `revision='0224971'`: Basic ChatVector, with [40B+ trained KoEn ckpt(rev. ad39b32)](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview/commit/ad39b32cd4207f37f61f16e79d3f4020c5b744ef).
>
> v2. `revision='4f04d1e'`: Model diff based merging(ref. https://huggingface.co/blog/maywell/llm-feature-transfer), with [25B+ trained KoEn ckpt(rev. d4d25a2)](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview/commit/d4d25a2).
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
model_id = "beomi/Llama-3-KoEn-8B-xtuner-llava-preview"
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype='auto',
device_map='auto',
revision='a38aac3', # 'a38aac3' for basic ChatVector, '4f04d1e' for Model diff based merging(ref. https://huggingface.co/blog/maywell/llm-feature-transfer)
)
processor = AutoProcessor.from_pretrained(model_id)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)  # load the repo's tokenizer to resolve the stop tokens
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
prompt = ("<|start_header_id|>user<|end_header_id|>\n\n<image>\n이 이미지에 대해서 설명해주세요.<|eot_id|>"
          "<|start_header_id|>assistant<|end_header_id|>\n\n이 이미지에는")
image_file = "https://cdn-uploads.huggingface.co/production/uploads/5e56829137cb5b49818287ea/NWfoArWI4UPAxpEnolkwT.jpeg"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)
output = model.generate(**inputs, max_new_tokens=400, do_sample=True, eos_token_id=terminators,)
print(processor.decode(output[0][2:], skip_special_tokens=False))
# --- Example Output [v1, Chat Vector] ---
user<|end_header_id|>
<image>
이 이미지에 대해서 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
์ด ์ด๋ฏธ์ง์๋ ๊ณ ์์ด ํ ๋ง๋ฆฌ๊ฐ ๊ฐ๋ฌผ ์๋ฅผ ๊ฑธ์ด๊ฐ๋ ๋ชจ์ต์ด ๋ณด์ฌ์ง๋๋ค. ๊ณ ์์ด๋ ๊ฐ๋ฌผ์ ์๋ฌผ๊ฒฐ์ ๋ฏธ๋๋ผ์ ํ๊ณ ๊ฐ ๊ฐ๋ก๋ฅผ ์ง๋๋ ๋ฐ ๋ฅ์ํ๊ฒ ๋ณด์๋๋ค. ๊ณ ์์ด์ ๋ฐ์ ๊ฐ๋ฌผ๋ก ์ ๋ค์ด๊ฐ, ๊ทธ๊ฒ์ ์ฆ๊ธฐ๋ฉฐ ๊ฑธ์ด๊ฐ๋๋ค.
๋ํ ์ด ์ด๋ฏธ์ง๋ ์์ฑ ๋น์์ ํ๊ฑฐ๋ ๋นํ๋ ์๋ฃ๋ก ์ ์๋์์ผ๋ฉฐ, ์ฃผ๋ก ๊ณ ์์ด์ ๋ชจ์ต์ ๊ฐํ๊ฒ ๋ณด์ฌ์ค๋๋ค. ์๋ฆฌ ํจ๊ณผ๋ ์ฌ๋ฌ ๊ฐ์ง๋ก ์ถ๊ฐํ์ฌ ๊ณ ์์ด์ ์คํ ๋ฆฌ๋ฅผ ๋ค์ํ๊ฒ ์ ๋ฌํฉ๋๋ค. ๊ฐ๋ฌผ์ ์๋ฌผ๊ฒฐ์ ๋ํ๋ด๋ฉฐ ๊ฐ๋ฌผ ์๋ฅผ ๊ฑท๋ ๊ณ ์์ด์ ๋ชจ์ต์ ๋์ฑ ๊ฐ๋ ฌํ๊ฒ ๊ฐ์กฐํ๊ธฐ ์ํด ์๋ฌผ๊ฒฐ์ ํตํด ๋ ๋ํ์ผํ ์ฅ๋ฉด์ ๋ณด์ฌ์ค๋๋ค.<|eot_id|>
# --- Example Output [v1-1, Chat Vector] ---
user<|end_header_id|>
<image>
이 이미지에 대해서 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
์ด ์ด๋ฏธ์ง์์๋ ํ ๊ณ ์์ด๊ฐ ์ํด์์ ์์นํ ๋ฐ๋ค๋ฅผ ๊ฑท๊ณ ์๋ ๋ชจ์ต์ ๋ณผ ์ ์์ต๋๋ค. ๊ณ ์์ด๋ ํด๋ณ์์๋ถํฐ ๋ฐ๋ค๋ก ๊ฑธ์ด๋ค์ด๊ฐ๋ ์ค์ด๋ฉฐ, ์ฃผ๋ณ์๋ ์์ํ ํ๋๊ฐ ๋ฐ๋ ค์ค๋ ๋ชจ์ต์ ๋ณด์ฌ์ฃผ๊ณ ์์ต๋๋ค. ์ด ๊ณ ์์ด๋ ํ์ด๋ ๋๋ถํฐ ๊ณ ์์ด์ ๊ฐ์์ง์๋ ๋ค๋ฅด๊ฒ ๋ฐ๋ค๋ฅผ ๊ฒฝํํ๊ณ , ์ ์ํด๊ฐ๊ณ ์์ต๋๋ค. ๊ณ ์์ด๋ ๋ฐ๋ค๋ฅผ ์ข์ํ๊ณ , ์ด ํ๊ฒฝ์์ ํ๋ณต์ ๋๋ผ๋ ๊ฒ ๊ฐ์ต๋๋ค. ์ด ๊ณ ์์ด๋ ์ธ๊ฐ์ด ์๋ ์์ฐ์ ์ผ๋ถ๋ก์จ ์ด ํ๊ฒฝ์์ ์ด์๊ฐ๊ณ ์์ต๋๋ค.<|eot_id|>
# --- Example Output [v2, Model diff based merging] ---
user<|end_header_id|>
<image>
이 이미지에 대해서 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
์ด ์ด๋ฏธ์ง์๋ ํ๊ตญ์ด ์๋ง๊ณผ ํจ๊ป ๊ณ ์์ด๊ฐ ๋ฌผ์ ๋ฐ์ ๋๋๊ณ ๊ฑท๋ ๋ชจ์ต์ด ๋ด๊ฒจ ์์ต๋๋ค. ๊ณ ์์ด๋ ์ค๋ฅธ์ชฝ ๋ฐ์ ๋ฌผ์ ๋ด๊ทธ๊ณ ๊ฑท๋ ์ค์ด๋ฉฐ, ํ๊ตญ์ด ์๋ง์ "๊ณ ์์ด๋ ๋ฌผ์ ์ข์ํฉ๋๋ค"๋ผ๋ ๋ฌธ์ฅ์ ํฌํจํ๊ณ ์์ต๋๋ค. ์ด ์๋ง์ ๊ณ ์์ด๊ฐ ๋ฌผ์ ์ข์ํ๋ ๊ฒ์ ๊ฐ์กฐํ๊ณ ์์ต๋๋ค.<|eot_id|>
```
| {"language": ["ko", "en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers"} | beomi/Llama-3-KoEn-8B-xtuner-llava-preview | null | [
"transformers",
"safetensors",
"llava",
"pretraining",
"ko",
"en",
"arxiv:2310.04799",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:21:54+00:00 |
text-generation | transformers |
# Llama-3-OpenBioMed-8B-slerp-v0.2
Llama-3-OpenBioMed-8B-slerp-v0.2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)
* [johnsnowlabs/JSL-MedLlama-3-8B-v2.0](https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v2.0)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: aaditya/Llama3-OpenBioLLM-8B
layer_range: [0, 32]
- model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0
layer_range: [0, 32]
merge_method: slerp
base_model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "abhinand/Llama-3-OpenBioMed-8B-slerp-v0.2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "aaditya/Llama3-OpenBioLLM-8B", "johnsnowlabs/JSL-MedLlama-3-8B-v2.0"], "base_model": ["aaditya/Llama3-OpenBioLLM-8B", "johnsnowlabs/JSL-MedLlama-3-8B-v2.0"]} | abhinand/Llama-3-OpenBioMed-8B-slerp-v0.2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"aaditya/Llama3-OpenBioLLM-8B",
"johnsnowlabs/JSL-MedLlama-3-8B-v2.0",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:johnsnowlabs/JSL-MedLlama-3-8B-v2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T03:22:03+00:00 |
null | null | {} | syannn/spammer | null | [
"region:us"
] | null | 2024-05-02T03:22:53+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | TinyPixel/llama-3-adapter3 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:23:03+00:00 |
text-generation | null |
## Llamacpp imatrix Quantizations of Hermes-2-Pro-Llama-3-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization.
Original model: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```
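If you run these files with llama-cpp-python, here is a minimal sketch using this ChatML-style template (the filename and context size are assumptions):
```python
from llama_cpp import Llama

llm = Llama(model_path="Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf", n_ctx=8192)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"},
])
print(out["choices"][0]["message"]["content"])
```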
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Hermes-2-Pro-Llama-3-8B-Q8_0.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Hermes-2-Pro-Llama-3-8B-Q6_K.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Hermes-2-Pro-Llama-3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Hermes-2-Pro-Llama-3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Hermes-2-Pro-Llama-3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Hermes-2-Pro-Llama-3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Hermes-2-Pro-Llama-3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Hermes-2-Pro-Llama-3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Hermes-2-Pro-Llama-3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Hermes-2-Pro-Llama-3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Hermes-2-Pro-Llama-3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Hermes-2-Pro-Llama-3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Hermes-2-Pro-Llama-3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Hermes-2-Pro-Llama-3-8B-Q2_K.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Hermes-2-Pro-Llama-3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Hermes-2-Pro-Llama-3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Hermes-2-Pro-Llama-3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Hermes-2-Pro-Llama-3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Very low quality, uses SOTA techniques to be usable. |
| [Hermes-2-Pro-Llama-3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Hermes-2-Pro-Llama-3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
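If you prefer to script the download of a single file rather than clicking a link above, a sketch using `hf_hub_download` from the `huggingface_hub` package could look like this (the filename is just an example; substitute whichever quant you picked):

```python
# Sketch: download one quant file (not the whole branch) with huggingface_hub.
# The filename below is an example -- swap in the quant you chose from the table.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Hermes-2-Pro-Llama-3-8B-GGUF",
    filename="Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
    local_dir=".",  # where to place the downloaded file
)
print(path)
```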
## Which file should I choose?
A great write-up with charts showing various performance levels is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
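As a toy illustration of that sizing rule, the sketch below picks the largest quant that fits a given memory budget. The sizes are copied from the table above, and the 1.5GB headroom is an assumed midpoint of the suggested 1-2GB range:

```python
# Toy sketch of the sizing rule above: choose the largest quant whose file
# fits in your memory budget minus some headroom (1.5GB assumed here, the
# midpoint of the suggested 1-2GB range). Sizes in GB, from the table above.
QUANT_SIZES = {
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q5_K_S": 5.59,
    "Q4_K_M": 4.92, "Q4_K_S": 4.69, "IQ4_NL": 4.67, "IQ4_XS": 4.44,
    "Q3_K_L": 4.32, "Q3_K_M": 4.01, "IQ3_M": 3.78, "IQ3_S": 3.68,
    "Q3_K_S": 3.66, "IQ3_XS": 3.51, "IQ3_XXS": 3.27, "Q2_K": 3.17,
    "IQ2_M": 2.94, "IQ2_S": 2.75, "IQ2_XS": 2.60, "IQ2_XXS": 2.39,
    "IQ1_M": 2.16, "IQ1_S": 2.01,
}

def pick_quant(budget_gb: float, headroom_gb: float = 1.5) -> str | None:
    """Return the largest quant fitting in budget_gb minus headroom, if any."""
    fitting = {q: s for q, s in QUANT_SIZES.items() if s <= budget_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))  # e.g. an 8GB card -> 'Q5_K_M' (5.73GB)
```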
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed versus quality is a tradeoff you'll have to weigh.
The I-quants are *not* compatible with Vulkan, another backend that supports AMD cards, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines offer specific builds for ROCm.
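Distilled into a rough heuristic in code (an illustration of the advice above, not an official rule):

```python
# Rough heuristic distilled from the advice above -- not an official rule.
def suggest_quant_family(bits_per_weight: float, backend: str) -> str:
    """backend: one of 'cublas', 'rocblas', 'vulkan', 'cpu', 'metal'."""
    if backend == "vulkan":
        return "K-quant"  # I-quants are not compatible with Vulkan
    if bits_per_weight < 4 and backend in ("cublas", "rocblas"):
        return "I-quant"  # newer method, better quality per byte below Q4
    return "K-quant"      # simple default; I-quants run slower on CPU/Metal

print(suggest_quant_family(3, "cublas"))  # -> I-quant
```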
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "quantized_by": "bartowski", "pipeline_tag": "text-generation", "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]} | bartowski/Hermes-2-Pro-Llama-3-8B-GGUF | null | [
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T03:23:42+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ryanyeo/kirnect-Llama-3-Open-Ko-8B-Instruct-preview-0501-5150step-8batch_5epoch | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T03:24:02+00:00 |
null | null | {} | ppxscal/cv_project | null | [
"region:us"
] | null | 2024-05-02T03:27:32+00:00 |